Poster
Order Optimal One-Shot Distributed Learning
Arsalan Sharifnassab · Saber Salehkaleybar · S. Jamaloddin Golestani
East Exhibition Hall B, C #82
Keywords: [ Optimization ] [ Probabilistic Methods ] [ Distributed Inference ]
Abstract:
We consider distributed statistical optimization in a one-shot setting, where there are $m$ machines, each observing $n$ i.i.d. samples. Based on its observed samples, each machine sends an $O(\log(mn))$-length message to a server, at which a parameter minimizing an expected loss is to be estimated. We propose an algorithm called Multi-Resolution Estimator (MRE) whose expected error is no larger than $\tilde{O}( m^{-1/\max(d,2)} n^{-1/2})$, where $d$ is the dimension of the parameter space. This error bound meets existing lower bounds up to poly-logarithmic factors, and is therefore order optimal. The expected error of MRE, unlike that of existing algorithms, tends to zero as the number of machines ($m$) goes to infinity, even when the number of samples per machine ($n$) remains upper bounded by a constant. This property of the MRE algorithm makes it applicable in new machine learning paradigms where $m$ is much larger than $n$.
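To illustrate the scaling behavior claimed in the abstract, the following minimal Python sketch evaluates the stated error bound $\tilde{O}( m^{-1/\max(d,2)} n^{-1/2})$ for sample values of $m$, $n$, and $d$. It ignores constants and poly-logarithmic factors (the constant `c` is a placeholder, not from the paper), and is not an implementation of the MRE algorithm itself.

```python
def mre_error_bound(m: int, n: int, d: int, c: float = 1.0) -> float:
    """Illustrative evaluation of the stated bound
    O~(m^{-1/max(d,2)} * n^{-1/2}), ignoring poly-log factors.
    The constant c is a hypothetical placeholder."""
    return c * m ** (-1.0 / max(d, 2)) * n ** (-0.5)

# Key property highlighted in the abstract: with n fixed (here n = 5),
# the bound still vanishes as the number of machines m grows.
for m in (10, 10**3, 10**6):
    print(f"m = {m:>8}: bound ~ {mre_error_bound(m, n=5, d=3):.4f}")
```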