Information-theoretic analysis of generalization capability of learning algorithms
Aolin Xu · Maxim Raginsky

Tue Dec 05 03:20 PM -- 03:25 PM (PST) @ Hall C

We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The upper bounds provide theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information of a learning algorithm. The results can also be used to analyze the generalization capability of learning algorithms under adaptive composition, and the bias-accuracy tradeoffs in adaptive data analytics. Our work extends and leads to nontrivial improvements on the recent results of Russo and Zou.
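As a concrete illustration of the kind of bound described above (a sketch, not the paper's full statement): for a learning algorithm that maps a training sample $S$ of $n$ i.i.d. examples to a hypothesis $W$, with a loss function that is $\sigma$-subgaussian, the expected generalization error can be controlled by the mutual information $I(S;W)$ between input and output:

```latex
% Hedged sketch of an input-output mutual information generalization bound;
% S = training sample of n i.i.d. examples, W = output hypothesis,
% assuming the loss \ell(w, Z) is \sigma-subgaussian under the data distribution.
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)}
```

Here $L_\mu(W)$ is the population risk and $L_S(W)$ the empirical risk; the bound shrinks as the algorithm leaks less information about its training data, which is the sense in which limiting $I(S;W)$ trades data fit against generalization.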

Author Information

Aolin Xu (University of Illinois at Urbana-Champaign)
Maxim Raginsky (University of Illinois at Urbana-Champaign)
