
Revisiting $(\epsilon, \gamma, \tau)$-similarity learning for domain adaptation
Sofiane Dhouib · Ievgen Redko

Wed Dec 05 06:45 AM -- 06:50 AM (PST) @ Room 517 CD
Similarity learning is an active research area in machine learning that tackles the problem of finding a similarity function tailored to an observed data sample in order to achieve efficient classification. This learning scenario has generally been formalized by means of the $(\epsilon, \gamma, \tau)$-good similarity framework for supervised classification and has been shown to enjoy strong theoretical guarantees. In this paper, we extend the theoretical analysis of similarity learning to the domain adaptation setting, the particular situation in which the similarity is learned on samples drawn from one probability distribution and then deployed on samples following a different one. We give a new definition of an $(\epsilon, \gamma)$-good similarity for domain adaptation and prove several results quantifying the performance of a similarity function on a target domain after it has been trained on a source domain. In particular, we show that if the source distribution dominates the target one, then fundamentally new domain adaptation learning bounds can be proved.
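For context, the abstract builds on the well-known goodness framework for similarity functions due to Balcan et al. A sketch of the $(\epsilon, \gamma)$ part of that definition, as it is commonly stated (the notation below is a standard recollection, not taken from this paper):

```latex
% (\epsilon,\gamma)-good similarity (commonly stated form, assumed notation):
% a similarity K is (\epsilon,\gamma)-good for a distribution P over labeled
% pairs (x, y) if at least a (1 - \epsilon) probability mass of examples is,
% on average, more similar to examples of its own class by a margin \gamma:
\Pr_{(x,y)\sim P}\Big[
  \mathbb{E}_{(x',y')\sim P}\big[\, y\, y'\, K(x,x') \,\big] \;\ge\; \gamma
\Big] \;\ge\; 1 - \epsilon.
```

The additional parameter $\tau$ in the $(\epsilon, \gamma, \tau)$ variant usually lower-bounds the probability mass of "reasonable" landmark points over which the expectation is restricted.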

Author Information

Sofiane Dhouib (CREATIS UMR CNRS 5220)
Ievgen Redko (Hubert Curien laboratory)
