Spotlight
Multiclass Learning Approaches: A Theoretical Comparison with Implications
Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz
Wed Dec 05 10:26 AM -- 10:30 AM (PST) @ Harveys Convention Center Floor, CC
We theoretically analyze and compare the following five popular multiclass classification methods: One vs. All, All Pairs, Tree-based classifiers, Error Correcting Output Codes (ECOC) with randomly generated code matrices, and Multiclass SVM. In the first four methods, the classification is based on a reduction to binary classification. We consider the case where the binary classifier comes from a class of VC dimension $d$, and in particular from the class of halfspaces over $\mathbb{R}^d$. We analyze both the estimation error and the approximation error of these methods. Our analysis reveals interesting conclusions of practical relevance regarding the success of the different approaches under various conditions. Our proof technique employs tools from VC theory to analyze the \emph{approximation error} of hypothesis classes. This is in sharp contrast to most, if not all, previous uses of VC theory, which only deal with estimation error.
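The first four methods the abstract lists share one mechanism: train several binary classifiers, then combine their outputs into a multiclass prediction. As a minimal sketch of that prediction step (the function names and toy sizes below are illustrative, not from the paper), here is One vs. All decoding by argmax over per-class margins, and ECOC decoding by nearest codeword under Hamming distance, using a randomly generated ±1 code matrix as in the variant the paper analyzes:

```python
import random

def ecoc_predict(code_matrix, bit_predictions):
    """Return the class whose +/-1 codeword is nearest in Hamming
    distance to the binary classifiers' +/-1 outputs."""
    best_cls, best_dist = 0, float("inf")
    for cls, codeword in enumerate(code_matrix):
        dist = sum(b != p for b, p in zip(codeword, bit_predictions))
        if dist < best_dist:
            best_cls, best_dist = cls, dist
    return best_cls

def one_vs_all_predict(scores):
    """scores[i] is the margin of binary problem 'class i vs. rest';
    predict the class with the largest margin."""
    return max(range(len(scores)), key=lambda i: scores[i])

# Randomly generated +/-1 code matrix: k classes, l binary problems.
random.seed(0)
k, l = 4, 8
code_matrix = [[random.choice([-1, 1]) for _ in range(l)] for _ in range(k)]
```

If the l binary predictions exactly match some class's codeword, ECOC decoding recovers that class; otherwise the Hamming decoding tolerates a few flipped bits, which is the error-correcting property the random code matrix provides.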
Author Information
Amit Daniely (Hebrew University and Google Research)
Sivan Sabato (Ben-Gurion University of the Negev)
Shai Shalev-Shwartz (Mobileye & HUJI)
Related Events (a corresponding poster, oral, or spotlight)
- 2012 Poster: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
  Thu. Dec 6th through Wed the 5th, Harrah's Special Events Center 2nd Floor
More from the Same Authors
- 2022 Poster: Knowledge Distillation: Bad Models Can Be Good Role Models »
  Gal Kaplun · Eran Malach · Preetum Nakkiran · Shai Shalev-Shwartz
- 2021: Q&A with Shai Shalev-Shwartz »
  Shai Shalev-Shwartz
- 2021: Deep Learning: Success, Failure, and the Border between them, Shai Shalev-Shwartz »
  Shai Shalev-Shwartz
- 2020 Poster: Neural Networks Learning and Memorization with (almost) no Over-Parameterization »
  Amit Daniely
- 2020 Poster: The Implications of Local Correlation on Learning Some Deep Functions »
  Eran Malach · Shai Shalev-Shwartz
- 2020 Poster: Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations »
  Amit Daniely · Hadas Shacham
- 2020 Spotlight: Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations »
  Amit Daniely · Hadas Shacham
- 2020 Poster: Learning Parities with Neural Networks »
  Amit Daniely · Eran Malach
- 2020 Poster: Hardness of Learning Neural Networks with Natural Weights »
  Amit Daniely · Gal Vardi
- 2020 Oral: Learning Parities with Neural Networks »
  Amit Daniely · Eran Malach
- 2019 Poster: Locally Private Learning without Interaction Requires Separation »
  Amit Daniely · Vitaly Feldman
- 2019 Poster: Generalization Bounds for Neural Networks via Approximate Description Length »
  Amit Daniely · Elad Granot
- 2019 Spotlight: Generalization Bounds for Neural Networks via Approximate Description Length »
  Amit Daniely · Elad Granot
- 2019 Poster: Is Deeper Better only when Shallow is Good? »
  Eran Malach · Shai Shalev-Shwartz
- 2017 Poster: Decoupling "when to update" from "how to update" »
  Eran Malach · Shai Shalev-Shwartz
- 2017 Poster: Nearest-Neighbor Sample Compression: Efficiency, Consistency, Infinite Dimensions »
  Aryeh Kontorovich · Sivan Sabato · Roi Weiss
- 2017 Poster: SGD Learns the Conjugate Kernel Class of the Network »
  Amit Daniely
- 2016 Poster: Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity »
  Amit Daniely · Roy Frostig · Yoram Singer
- 2016 Poster: Learning a Metric Embedding for Face Recognition using the Multibatch Method »
  Oren Tadmor · Tal Rosenwein · Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua
- 2015 Poster: Beyond Convexity: Stochastic Quasi-Convex Optimization »
  Elad Hazan · Kfir Y. Levy · Shai Shalev-Shwartz
- 2014 Poster: Active Regression by Stratification »
  Sivan Sabato · Remi Munos
- 2014 Poster: On the Computational Efficiency of Training Neural Networks »
  Roi Livni · Shai Shalev-Shwartz · Ohad Shamir
- 2013 Poster: More data speeds up training time in learning halfspaces over sparse vectors »
  Amit Daniely · Nati Linial · Shai Shalev-Shwartz
- 2013 Spotlight: More data speeds up training time in learning halfspaces over sparse vectors »
  Amit Daniely · Nati Linial · Shai Shalev-Shwartz
- 2013 Poster: Accelerated Mini-Batch Stochastic Dual Coordinate Ascent »
  Shai Shalev-Shwartz · Tong Zhang
- 2013 Poster: Auditing: Active Learning with Outcome-Dependent Query Costs »
  Sivan Sabato · Anand D Sarwate · Nati Srebro
- 2012 Poster: Learning Halfspaces with the Zero-One Loss: Time-Accuracy Tradeoffs »
  Aharon Birnbaum · Shai Shalev-Shwartz
- 2011 Poster: ShareBoost: Efficient multiclass learning with feature sharing »
  Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua
- 2011 Session: Spotlight Session 4 »
  Shai Shalev-Shwartz
- 2011 Session: Oral Session 4 »
  Shai Shalev-Shwartz
- 2010 Poster: Tight Sample Complexity of Large-Margin Learning »
  Sivan Sabato · Nati Srebro · Naftali Tishby
- 2008 Poster: Fast Rates for Regularized Objectives »
  Karthik Sridharan · Shai Shalev-Shwartz · Nati Srebro
- 2008 Poster: Mind the Duality Gap: Logarithmic regret algorithms for online optimization »
  Shai Shalev-Shwartz · Sham M Kakade
- 2008 Spotlight: Mind the Duality Gap: Logarithmic regret algorithms for online optimization »
  Shai Shalev-Shwartz · Sham M Kakade
- 2006 Poster: Online Classification for Complex Problems Using Simultaneous Projections »
  Yonatan Amit · Shai Shalev-Shwartz · Yoram Singer
- 2006 Poster: Convex Repeated Games and Fenchel Duality »
  Shai Shalev-Shwartz · Yoram Singer
- 2006 Spotlight: Convex Repeated Games and Fenchel Duality »
  Shai Shalev-Shwartz · Yoram Singer