

Poster

Optimizing affinity-based binary hashing using auxiliary coordinates

Ramin Raziperchikolaei · Miguel A. Carreira-Perpinan

Area 5+6+7+8 #144

Keywords: [ Combinatorial Optimization ] [ (Other) Optimization ] [ Similarity and Distance Learning ] [ (Application) Information Retrieval ] [ Nonlinear Dimension Reduction and Manifold Learning ]


Abstract:

In supervised binary hashing, one wants to learn a function that maps a high-dimensional feature vector to a vector of binary codes, for application to fast image retrieval. This typically results in a difficult optimization problem, nonconvex and nonsmooth, because of the discrete variables involved. Much work has simply relaxed the problem during training, solving a continuous optimization and truncating the codes a posteriori. This gives reasonable results but is quite suboptimal. Recent work has tried to optimize the objective directly over the binary codes and achieved better results, but the hash function was still learned a posteriori, which remains suboptimal. We propose a general framework for learning hash functions using affinity-based loss functions that uses auxiliary coordinates. This closes the loop and optimizes jointly over the hash functions and the binary codes so that they gradually match each other. The resulting algorithm can be seen as an iterated version of the procedure of optimizing first over the codes and then learning the hash function. Compared to this, our optimization is guaranteed to obtain better hash functions while not being much slower, as we demonstrate experimentally on various supervised datasets. In addition, our framework facilitates the design of optimization algorithms for arbitrary types of loss and hash functions.
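The joint optimization described above can be illustrated with a minimal alternating sketch: with the hash function fixed, update the binary codes under a quadratic penalty that ties them to the hash-function outputs (the auxiliary coordinates), then refit the hash function to the current codes, gradually increasing the penalty so the two agree. Everything below is an assumption made for illustration, not the paper's algorithm: a simple quadratic affinity loss, greedy per-bit code flips for the code step, and one linear least-squares hasher per bit.

import numpy as np

def affinity_loss(Z, W):
    # Quadratic affinity loss: sum over pairs of (z_n . z_m / b - w_nm)^2,
    # with codes in {-1, +1}^b and w_nm = +1 for similar pairs, -1 otherwise.
    b = Z.shape[1]
    return np.sum((Z @ Z.T / b - W) ** 2)

def penalized_loss(Z, W, H, mu):
    # Affinity loss plus the auxiliary-coordinate penalty mu * ||Z - H||^2,
    # where H holds the binarized hash-function outputs.
    return affinity_loss(Z, W) + mu * np.sum((Z - H) ** 2)

def z_step(Z, W, H, mu):
    # Greedy per-bit flips: keep a flip only if it lowers the penalized loss.
    N, b = Z.shape
    for n in range(N):
        for i in range(b):
            before = penalized_loss(Z, W, H, mu)
            Z[n, i] *= -1
            if penalized_loss(Z, W, H, mu) > before:
                Z[n, i] *= -1   # revert: the flip did not help
    return Z

def h_step(X, Z):
    # Fit one linear hasher per bit by least squares; sign() binarizes its output.
    return np.linalg.lstsq(X, Z, rcond=None)[0]

# Toy run on random data, purely for illustration.
rng = np.random.default_rng(0)
N, D, b = 60, 10, 8
X = rng.standard_normal((N, D))
labels = rng.integers(0, 3, size=N)
W = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)  # supervised affinities

Z = np.sign(rng.standard_normal((N, b)))      # random initial codes in {-1, +1}
for it in range(5):
    A = h_step(X, Z)                          # learn hash function for current codes
    H = np.sign(X @ A)                        # hash-function outputs (auxiliary coordinates)
    Z = z_step(Z, W, H, mu=0.1 * (it + 1))    # update codes; penalty grows each iteration
print("affinity loss of final codes:", affinity_loss(Z, W))

Increasing the penalty weight over the iterations drives the codes and the hash-function outputs to match, which is the "closing the loop" behavior the abstract refers to; a single pass (fit codes once, then fit the hasher once) corresponds to the two-step procedure the method iterates.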
