
Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Leveraging Uniformity of Normalized Embeddings for Sequential Recommendation

Hyunsoo Chung · Jungtaek Kim


Pointwise loss is one of the most widely adopted and practical choices for training sequential recommendation models. Despite its success, only limited studies leverage normalized embeddings in their optimization, even though normalization has been actively explored and proven effective across many machine learning fields. However, we observe that naïvely adopting normalization degrades the quality of the learned recommendation policy. In particular, we argue that the clusterization of embeddings on the unit hypersphere triggers this performance degradation. To alleviate the issue, we propose a novel training objective that enforces uniformity of the embeddings while learning the recommendation policy. We empirically validate our method on sequential recommendation tasks and show superior performance compared to other approaches without normalization.
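The abstract does not specify the exact form of the proposed objective, but a standard way to enforce uniformity on the unit hypersphere is the Gaussian-potential uniformity loss of Wang & Isola (2020), which such a training objective could plausibly build on. The sketch below is an illustration of that general idea, not the authors' method; the function name, the temperature `t`, and the weighting `lam` are all assumptions:

```python
import torch

def uniformity_loss(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Log of the average pairwise Gaussian potential over L2-normalized
    embeddings. Minimizing it spreads the embeddings toward a uniform
    distribution on the unit hypersphere, counteracting clusterization."""
    # Project embeddings onto the unit hypersphere.
    z = torch.nn.functional.normalize(embeddings, dim=-1)
    # Pairwise squared Euclidean distances between distinct embeddings.
    sq_dists = torch.pdist(z, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()
```

In a training loop, such a term would typically be added as a regularizer to the pointwise recommendation loss, e.g. `loss = rec_loss + lam * uniformity_loss(item_embeddings)`, where `lam` trades off recommendation accuracy against uniformity. Identical (fully clustered) embeddings give a loss of 0, and the loss decreases as the embeddings spread apart on the sphere.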
