Hierarchical clustering is a data analysis method that has been used for decades. Despite its widespread use, the method has an underdeveloped analytical foundation. A well-understood foundation would both support the methods currently in use and help guide future improvements. The goal of this paper is to give an analytical framework that better explains observations seen in practice. This paper considers the dual of a problem framework for hierarchical clustering introduced by Dasgupta. The main result is that one of the most popular algorithms used in practice, average linkage agglomerative clustering, achieves a small constant approximation ratio for this objective. Furthermore, this paper establishes a strong lower bound showing that bisecting k-means divisive clustering achieves only a very poor approximation ratio for the same objective. However, we show that there are divisive algorithms that perform well with respect to this objective by giving two constant-factor approximation algorithms. This paper presents some of the first work establishing guarantees on widely used hierarchical algorithms for a natural objective function. The objective and analysis give insight into what these popular algorithms are optimizing and when they can be expected to perform well.
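For context, here is a sketch of the objective pair the abstract refers to, written in the notation commonly used in the hierarchical clustering literature (the notation is an assumption on our part; the paper's own symbols may differ). Given n points with pairwise similarity weights, Dasgupta's framework scores a hierarchical clustering tree T by the size of the subtree where each pair is first split apart; the dual objective studied here rewards separating dissimilar pairs early.

```latex
% Standard notation (assumed, for illustration): n points, similarity
% weights w_{ij} >= 0, tree T whose leaves are the points, and
% T_{i \vee j} = subtree rooted at the least common ancestor of i and j.
\[
  \mathrm{cost}(T) \;=\; \sum_{i < j} w_{ij}\,\bigl|\mathrm{leaves}(T_{i \vee j})\bigr|
  \qquad \text{(Dasgupta's cost, to be minimized)}
\]
\[
  \mathrm{rev}(T) \;=\; \sum_{i < j} w_{ij}\,\bigl(n - \bigl|\mathrm{leaves}(T_{i \vee j})\bigr|\bigr)
  \qquad \text{(the dual revenue objective, to be maximized)}
\]
% Duality: for every tree T,
% cost(T) + rev(T) = n \sum_{i<j} w_{ij}, a constant independent of T.
```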
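Since the abstract centers on average linkage agglomerative clustering, the following is a minimal Python sketch of that algorithm on a similarity matrix. It is an illustration under our own assumptions, not the paper's implementation; the names (`average_linkage`, `sim`) are hypothetical.

```python
import numpy as np

def average_linkage(sim: np.ndarray):
    """Agglomerative clustering with average linkage on a similarity matrix.

    sim: symmetric (n, n) array of pairwise similarities (higher = more similar).
    Returns the merge sequence; each entry records the two clusters merged.
    """
    n = sim.shape[0]
    clusters = {i: [i] for i in range(n)}  # cluster id -> member points
    merges = []
    while len(clusters) > 1:
        # Find the pair of clusters with the largest average similarity.
        best, best_pair = -np.inf, None
        ids = list(clusters)
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                ca, cb = clusters[ids[a]], clusters[ids[b]]
                avg = sim[np.ix_(ca, cb)].mean()
                if avg > best:
                    best, best_pair = avg, (ids[a], ids[b])
        a, b = best_pair
        merges.append((tuple(clusters[a]), tuple(clusters[b])))
        clusters[a] = clusters[a] + clusters[b]  # merge b into a
        del clusters[b]
    return merges

# Tiny usage example: two obvious groups, {0, 1} and {2, 3}.
S = np.array([[1.0, 0.9, 0.1, 0.2],
              [0.9, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.8],
              [0.2, 0.1, 0.8, 1.0]])
print(average_linkage(S))
```

The merge sequence implicitly defines the hierarchical tree; the paper's main result is that the tree built this way achieves a small constant approximation ratio for the revenue objective above.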
Author Information
Benjamin Moseley (Carnegie Mellon University)
Joshua Wang (Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Oral: Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting K-means, and Local Search
  Tue, Dec 5th, 07:10 -- 07:25 PM, Room: Hall A
More from the Same Authors
- 2022 Poster: Algorithms with Prediction Portfolios
  Michael Dinitz · Sungjin Im · Thomas Lavastida · Benjamin Moseley · Sergei Vassilvitskii
- 2021: AI workloads inside databases
  Guy Van den Broeck · Alexander Ratner · Benjamin Moseley · Konstantinos Karanasos · Parisa Kordjamshidi · Molham Aref · Arun Kumar
- 2021 Poster: Robust Online Correlation Clustering
  Silvio Lattanzi · Benjamin Moseley · Sergei Vassilvitskii · Yuyan Wang · Rudy Zhou
- 2021 Poster: Margin-Independent Online Multiclass Learning via Convex Geometry
  Guru Guruganesh · Allen Liu · Jon Schneider · Joshua Wang
- 2021 Oral: Faster Matchings via Learned Duals
  Michael Dinitz · Sungjin Im · Thomas Lavastida · Benjamin Moseley · Sergei Vassilvitskii
- 2021 Poster: Faster Matchings via Learned Duals
  Michael Dinitz · Sungjin Im · Thomas Lavastida · Benjamin Moseley · Sergei Vassilvitskii
- 2020 Poster: Fair Hierarchical Clustering
  Sara Ahmadian · Alessandro Epasto · Marina Knittel · Ravi Kumar · Mohammad Mahdian · Benjamin Moseley · Philip Pham · Sergei Vassilvitskii · Yuyan Wang
- 2019 Poster: Efficient Rematerialization for Deep Networks
  Ravi Kumar · Manish Purohit · Zoya Svitkina · Erik Vee · Joshua Wang
- 2019 Poster: Backprop with Approximate Activations for Memory-efficient Network Training
  Ayan Chakrabarti · Benjamin Moseley
- 2019 Poster: Cost Effective Active Search
  Shali Jiang · Roman Garnett · Benjamin Moseley
- 2018 Poster: Efficient nonmyopic batch active search
  Shali Jiang · Gustavo Malkomes · Matthew Abbott · Benjamin Moseley · Roman Garnett
- 2018 Spotlight: Efficient nonmyopic batch active search
  Shali Jiang · Gustavo Malkomes · Matthew Abbott · Benjamin Moseley · Roman Garnett
- 2018 Poster: Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
  Rad Niazadeh · Tim Roughgarden · Joshua Wang
- 2018 Oral: Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
  Rad Niazadeh · Tim Roughgarden · Joshua Wang