On Multiplicative Multitask Feature Learning
Xin Wang · Jinbo Bi · Shipeng Yu · Jiangwen Sun

Thu Dec 11 11:00 AM -- 03:00 PM (PST) @ Level 2, room 210D

We investigate a general framework of multiplicative multitask feature learning that decomposes each task's model parameters into the product of two components: one component shared across all tasks and one that is task-specific. Several previous methods arise as special cases of this framework. We study its theoretical properties when different regularization conditions are applied to the two decomposed components. We prove that the framework is mathematically equivalent to the widely used multitask feature learning methods based on a joint regularization of all model parameters, but with a more general form of regularizer. Further, we derive an analytical formula relating the across-task component to the task-specific component for all of these regularizers, leading to a better understanding of the shrinkage effect. Study of this framework motivates new multitask learning algorithms: we propose two new learning formulations obtained by varying the parameters of the framework. Empirical comparisons with the state of the art reveal the relative advantages of the two new formulations and provide instructive insights into feature learning with multiple tasks.
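As a minimal sketch of the decomposition described in the abstract: each task's weight vector is modeled as an elementwise product of a shared vector and a task-specific vector, and the two factors are regularized separately. The function below is an illustrative assumption, not the paper's exact formulation; the squared loss and the specific L1/L2 regularizer pair are hypothetical choices for concreteness.

```python
import numpy as np

def multiplicative_mtl_objective(c, V, Xs, ys, lam_c=0.1, lam_v=0.1):
    """Illustrative objective for multiplicative multitask feature learning.

    Each task t's weights are w_t = c * V[t] (elementwise product), where c
    is shared across all tasks and V[t] is task-specific. The choice of
    squared loss, L1 penalty on c, and squared-L2 penalty on V[t] is an
    assumption for illustration; the framework admits other regularizers.
    """
    loss = 0.0
    for X, y, v in zip(Xs, ys, V):
        w = c * v                                  # multiplicative decomposition
        loss += 0.5 * np.sum((X @ w - y) ** 2)     # per-task squared loss
    reg = lam_c * np.sum(np.abs(c)) + lam_v * sum(np.sum(v ** 2) for v in V)
    return loss + reg

# Toy example: 2 regression tasks, 3 shared features.
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((5, 3)) for _ in range(2)]
ys = [rng.standard_normal(5) for _ in range(2)]
c = np.ones(3)                       # shared across-task component
V = [np.zeros(3), np.zeros(3)]       # task-specific components
obj = multiplicative_mtl_objective(c, V, Xs, ys)
```

Because c enters every task's weights multiplicatively, an L1 penalty driving an entry of c to zero removes that feature from all tasks at once, while the task-specific factors V[t] adjust the surviving features per task; this is the shrinkage interplay the paper analyzes.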

Author Information

Xin Wang (University of Connecticut)
Jinbo Bi (University of Connecticut)
Shipeng Yu (Siemens)
Jiangwen Sun (University of Connecticut)