Poster
in
Workshop: Second Workshop on Quantum Tensor Networks in Machine Learning

Improvements to gradient descent methods for quantum tensor network machine learning

James Dborin


Abstract:

Tensor networks have demonstrated significant value for machine learning in a myriad of different applications. However, optimizing tensor networks using standard gradient descent has proven difficult in practice. Tensor networks suffer from initialization problems that lead to exploding or vanishing gradients, and they require extensive hyperparameter tuning. Efforts to overcome these problems usually depend on specific network architectures or ad hoc prescriptions. In this paper we address the problems of initialization and hyperparameter tuning, making it possible to train tensor networks using established machine learning techniques. We introduce a 'copy node' method that successfully initializes arbitrary tensor networks, in addition to a gradient-based regularization technique for bond dimensions. We present numerical results showing that the combination of techniques presented here produces quantum-inspired tensor network models with far fewer parameters, while improving generalization performance.
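The exploding/vanishing-gradient problem the abstract describes comes from contracting many tensors in sequence: if each core scales its input by a factor away from 1, the output norm grows or shrinks exponentially with network length. A minimal sketch of one way to sidestep this, assuming an initialization in which every matrix product state (MPS) core starts near an identity (so the network initially acts like a "copy" of its input) plus small noise. This is an illustrative stand-in, not necessarily the paper's exact copy-node construction; the function names and the `noise` parameter are hypothetical.

```python
import numpy as np

def near_identity_mps(n_sites, phys_dim=2, bond_dim=8, noise=1e-2, seed=0):
    """Build MPS cores that start close to an identity map on the bond
    indices, so the initial contraction norm is O(1) regardless of length
    (an assumed stand-in for a copy-node-style initialization)."""
    rng = np.random.default_rng(seed)
    cores = []
    for _ in range(n_sites):
        core = np.zeros((bond_dim, phys_dim, bond_dim))
        for i in range(phys_dim):
            # Identity on the bond indices for every physical index:
            # with a one-hot input, each site passes its bond vector
            # through unchanged.
            core[:, i, :] = np.eye(bond_dim)
        # Small perturbation to break symmetry for training.
        core += noise * rng.standard_normal(core.shape)
        cores.append(core)
    return cores

def contract_with_product_state(cores, vectors):
    """Contract the MPS with a product of per-site input vectors,
    closing the boundary bonds with normalized uniform vectors."""
    bond_dim = cores[0].shape[0]
    boundary = np.ones(bond_dim) / np.sqrt(bond_dim)
    v = boundary
    for core, x in zip(cores, vectors):
        # v_b = sum_{a,i} v_a * x_i * core[a, i, b]
        v = np.einsum("a,i,aib->b", v, x, core)
    return v @ boundary
```

With `noise=0` and one-hot inputs, the contraction is exactly 1 for any number of sites; with small noise it stays close to 1 rather than exploding or vanishing, which is the property a stable initialization needs so that gradients at the start of training are well-scaled.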
