Multitask learning meets tensor factorization: task imputation via convex optimization
Kishan Wimalawarne · Masashi Sugiyama · Ryota Tomioka

Thu Dec 11 11:00 AM -- 03:00 PM (PST) @ Level 2, room 210D

We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, e.g., (consumer, time). The weight vectors can be collected into a tensor, and the (multilinear) rank of that tensor controls how much information is shared among tasks. Two types of convex relaxations of the tensor multilinear rank have recently been proposed. However, we argue that neither of them is optimal in the context of multitask learning, where the dimensions or multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm, and analyze the excess risk of all three norms. The results apply to various settings, including matrix and tensor completion, multitask learning, and multilinear multitask learning. Both theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not know a priori which mode is low rank.
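To make the objects in the abstract concrete, the following is a minimal NumPy sketch (not the authors' code) of the mode-k unfoldings of a weight tensor, the overlapped trace norm (sum of nuclear norms of the unfoldings), and a simple upper bound on the scaled latent trace norm. The true latent norm is an infimum over all decompositions of the tensor into a sum of components; here we only evaluate the bound obtained by placing all mass in a single mode. All variable names and the example dimensions are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def overlapped_trace_norm(T):
    """Overlapped trace norm: sum of nuclear norms of all mode-k unfoldings."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

def scaled_latent_upper_bound(T):
    """Upper bound on the scaled latent trace norm.

    The norm is an infimum over decompositions T = sum_k T_k of
    sum_k ||unfold(T_k, k)||_nuc / sqrt(n_k); choosing the trivial
    decomposition (all mass in one mode) gives this upper bound.
    """
    return min(np.linalg.norm(unfold(T, k), 'nuc') / np.sqrt(T.shape[k])
               for k in range(T.ndim))

# Illustrative example: weight vectors of dimension d for tasks
# indexed by the pair (consumer, time).
rng = np.random.default_rng(0)
d, n_consumers, n_times = 8, 5, 6
W = rng.standard_normal((d, n_consumers, n_times))
print(overlapped_trace_norm(W))
print(scaled_latent_upper_bound(W))
```

The per-mode scaling by 1/sqrt(n_k) is what distinguishes the scaled latent trace norm from the plain latent trace norm; it is what makes the norm robust to heterogeneous mode sizes.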

Author Information

Kishan Wimalawarne (Tokyo Institute of Technology)
Masashi Sugiyama (RIKEN / University of Tokyo)
Ryota Tomioka (Microsoft Research AI4Science)