Quantum tensor networks in machine learning (QTNML) hold great potential to advance AI technologies. Quantum machine learning promises quantum advantages over classical machine learning (potentially exponential speedups in training, quadratic speedups in convergence, etc.), while tensor networks provide powerful simulations of quantum machine learning algorithms on classical computers. As a rapidly growing interdisciplinary area, QTNML may serve as an amplifier for computational intelligence, a transformer for machine learning innovations, and a propeller for AI industrialization.
Tensor networks, contracted networks of factor tensors, have arisen independently in several areas of science and engineering. Such networks appear in the description of physical processes, and an accompanying collection of numerical techniques has elevated quantum tensor networks into a variational model of machine learning. Underlying these algorithms is the compression of the high-dimensional data needed to represent quantum states of matter, and these compression techniques have recently proven ripe for many traditional problems in deep learning. Quantum tensor networks have shown significant power in compactly representing deep neural networks and in enabling their efficient training and theoretical understanding. More QTNML technologies are rapidly emerging, such as approximating probability functions and probabilistic graphical models. However, QTNML is still a young topic, and many open problems remain to be explored.
Quantum algorithms are typically described by quantum circuits (quantum computational networks). These circuits are themselves a class of tensor networks, creating an evident interplay between classical tensor network contraction algorithms and the execution of tensor contractions on quantum processors. The modern field of quantum-enhanced machine learning has started to use tools from tensor network theory to create new quantum models of machine learning and to better understand existing ones.
The interplay between tensor networks, machine learning and quantum algorithms is rich. Indeed, this interplay is based not just on numerical methods but on the equivalence of tensor networks to various quantum circuits, rapidly developing algorithms from the mathematics and physics communities for optimizing and transforming tensor networks, and connections to low-rank methods for learning. A merger of tensor network algorithms with state-of-the-art approaches in deep learning is now taking place. A new community is forming, which this workshop aims to foster.
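To make the circuit-as-tensor-network correspondence concrete, the following minimal sketch (purely illustrative; not taken from any of the talks) contracts a Hadamard and a CNOT gate as tensors with NumPy's einsum and recovers the Bell-state amplitudes.

```python
import numpy as np

# Single-qubit Hadamard gate as a 2x2 tensor (legs: out, in).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# CNOT reshaped to a 4-leg tensor (legs: out_ctrl, out_tgt, in_ctrl, in_tgt).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)

# Initial product state |0>|0>, one vector per qubit.
zero = np.array([1.0, 0.0])

# Contract the network: psi_{ab} = sum_{c,d,e} CNOT_{abcd} H_{ce} zero_e zero_d
psi = np.einsum("abcd,ce,e,d->ab", CNOT, H, zero, zero)
print(psi)  # [[0.707, 0], [0, 0.707]] -> (|00> + |11>)/sqrt(2)
```
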
Fri 6:00 a.m. - 6:05 a.m. | Opening Remarks (Opening)
A short introduction
Xiao-Yang Liu

Fri 6:05 a.m. - 6:35 a.m. | Invited Talk 1: Tensor Networks as a Data Structure in Probabilistic Modeling and for Learning Dynamical Laws from Data (Talk)
Recent years have seen significant interest in exploiting tensor networks in machine learning, both as a tool for formulating new learning algorithms and for enhancing the mathematical understanding of existing methods. In this talk, we will explore two readings of such a connection. On the one hand, we will consider the task of identifying the underlying non-linear governing equations, required both for obtaining an understanding and for making future predictions. We will see that this problem can be addressed in a scalable way by making use of tensor network based parameterizations for the governing equations. On the other hand, we will investigate the expressive power of tensor networks in probabilistic modelling. Inspired by the connection between tensor networks and machine learning, and the natural correspondence between tensor networks and probabilistic graphical models, we will provide a rigorous analysis of the expressive power of various tensor-network factorizations of discrete multivariate probability distributions. Joint work with A. Goeßmann, M. Götte, I. Roth, R. Sweke, G. Kutyniok, I. Glasser, N. Pancotti, J. I. Cirac.
Jens Eisert

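As a minimal illustration of the tensor-network factorizations of discrete distributions discussed in this talk, the sketch below (illustrative sizes and random cores, not the speaker's code) parameterizes a distribution over binary variables by a matrix product state and reads probabilities off with the Born rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, chi = 6, 2, 4   # n binary variables, bond dimension chi (illustrative sizes)

# Random MPS cores A_k[x_k] are chi x chi matrices; boundary vectors close the chain.
cores = [rng.normal(size=(d, chi, chi)) for _ in range(n)]
left, right = np.ones(chi), np.ones(chi)

def amplitude(x):
    """psi(x) = left^T A_1[x_1] ... A_n[x_n] right."""
    v = left
    for A, xk in zip(cores, x):
        v = v @ A[xk]
    return v @ right

# Born-machine reading: p(x) proportional to |psi(x)|^2 (normalized over all 2^n states).
amps = np.array([amplitude(x) for x in np.ndindex(*(d,) * n)])
probs = amps ** 2 / np.sum(amps ** 2)
print(probs.sum(), probs.max())   # 1.0 and the most likely configuration's probability
```
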
Fri 6:35 a.m. - 6:45 a.m. | Invited Talk 1 Q&A by Jens (Q&A)
Jens Eisert

Fri 6:45 a.m. - 7:17 a.m. | Invited Talk 2: Expressiveness in Deep Learning via Tensor Networks and Quantum Entanglement (Talk)
Understanding deep learning calls for addressing three fundamental questions: expressiveness, optimization and generalization. This talk will describe a series of works aimed at unraveling some of the mysteries behind expressiveness. I will begin by showing that state-of-the-art deep learning architectures, such as convolutional networks, can be represented as tensor networks --- a prominent computational model for quantum many-body simulations. This connection will inspire the use of quantum entanglement for defining measures of data dependencies modeled by deep networks. Next, I will derive a quantum max-flow / min-cut theorem characterizing the entanglement captured by deep networks. The theorem will give rise to new results that shed light on expressiveness in deep learning and, in addition, provide new tools for deep network design. Works covered in the talk were in collaboration with Yoav Levine, Or Sharir, Ronen Tamari, David Yakira and Amnon Shashua.
Nadav Cohen

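The entanglement-based measures of data dependencies mentioned in the abstract can be illustrated with the Schmidt (singular-value) spectrum of a matricized tensor; the toy sketch below is illustrative only and assumes nothing beyond NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy order-4 tensor standing in for the function computed by a network
# over four input "patches" (purely illustrative).
T = rng.normal(size=(3, 3, 3, 3))

def entanglement_entropy(tensor, left_axes):
    """Entropy of the Schmidt spectrum across the (left | right) partition."""
    right_axes = [ax for ax in range(tensor.ndim) if ax not in left_axes]
    M = np.transpose(tensor, left_axes + right_axes).reshape(
        np.prod([tensor.shape[ax] for ax in left_axes]), -1)
    s = np.linalg.svd(M, compute_uv=False)
    p = s**2 / np.sum(s**2)          # normalized Schmidt coefficients
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

# Compare the dependence between {0,1} and {2,3} with that between {0,2} and {1,3}.
print(entanglement_entropy(T, [0, 1]), entanglement_entropy(T, [0, 2]))
```
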
Fri 7:17 a.m. - 7:25 a.m. | Invited Talk 2 Q&A by Cohen (Q&A)
Nadav Cohen

Fri 7:25 a.m. - 7:55 a.m. | Invited Talk 3: Tensor Networks and Counting Problems on the Lattice (Talk)
An overview will be given of counting problems on the lattice, such as the calculation of the hard-square constant and of the residual entropy of ice. Monte Carlo techniques have difficulty calculating such quantities; we will demonstrate that tensor networks provide a natural framework for tackling these problems. We will also show that tensor networks reveal nonlocal hidden symmetries in those systems, and that the typical critical behaviour is witnessed by matrix product operators which form representations of tensor fusion categories.
Frank Verstraete

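As a taste of the counting problems mentioned above, the following sketch (illustrative, with assumed strip widths) estimates the hard-square entropy constant from the leading eigenvalue of a row-to-row transfer matrix, the simplest tensor-network treatment of the problem.

```python
import numpy as np
from itertools import product

def transfer_matrix(width):
    """Row-to-row transfer matrix for hard squares on a strip of the given width."""
    rows = [r for r in product((0, 1), repeat=width)
            if all(not (a and b) for a, b in zip(r, r[1:]))]    # no adjacent 1s in a row
    return np.array([[1.0 if all(not (a and b) for a, b in zip(r1, r2)) else 0.0
                      for r2 in rows] for r1 in rows])          # no vertical neighbours

# Leading eigenvalues grow like const * kappa^width, so their ratio estimates
# the hard-square entropy constant kappa ~ 1.50304808...
lam = [np.max(np.linalg.eigvalsh(transfer_matrix(w))) for w in (11, 12)]
print(lam[1] / lam[0])
```
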
Fri 7:55 a.m. - 8:05 a.m. | Invited Talk 3 Q&A by Frank (Q&A)
Frank Verstraete

Fri 8:05 a.m. - 8:50 a.m. | Invited Talk 4: Quantum in ML and ML in Quantum (Talk)
In this talk, I will cover recent results in two areas: 1) using quantum-inspired methods in machine learning, including using low-entanglement states (matrix product states / tensor-train decompositions) for different regression and classification tasks; 2) using machine learning methods for efficient classical simulation of quantum systems. I will cover our results on simulating quantum circuits on parallel computers using graph-based algorithms, and also efficient numerical methods for optimization using tensor trains for the computation of large numbers (up to B=100) on GPUs. The code is a combination of classical linear-algebra algorithms, Riemannian optimization methods and an efficient software implementation in TensorFlow.
Ivan Oseledets

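A minimal sketch of the tensor-train (matrix product state) compression underlying the first part of the talk: a plain TT-SVD in NumPy, applied to a toy low-rank tensor (the function and sizes are illustrative, not the speaker's code).

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a dense tensor into tensor-train (matrix product) cores via repeated SVD."""
    cores, shape = [], tensor.shape
    unfolding, rank = tensor.reshape(shape[0], -1), 1
    for n in shape[:-1]:
        U, S, Vt = np.linalg.svd(unfolding.reshape(rank * n, -1), full_matrices=False)
        new_rank = min(max_rank, len(S))
        cores.append(U[:, :new_rank].reshape(rank, n, new_rank))
        unfolding = S[:new_rank, None] * Vt[:new_rank]
        rank = new_rank
    cores.append(unfolding.reshape(rank, shape[-1], 1))
    return cores

# A low-rank test tensor: f(i, j, k, l) = sin(i + j + k + l) has TT-rank 2.
grid = np.arange(8)
T = np.sin(grid[:, None, None, None] + grid[None, :, None, None]
           + grid[None, None, :, None] + grid[None, None, None, :])
cores = tt_svd(T, max_rank=4)

# Reconstruct and check the approximation error.
recon = cores[0]
for core in cores[1:]:
    recon = np.tensordot(recon, core, axes=([-1], [0]))
print([c.shape for c in cores], np.max(np.abs(T - recon.reshape(T.shape))))
```
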
Fri 8:50 a.m. - 9:00 a.m. | Invited Talk 4 Q&A by Ivan (Q&A)
Ivan Oseledets

Fri 9:00 a.m. - 9:40 a.m. | Invited Talk 5: Live Presentation of TensorLy by Jean Kossaifi (Talk)
Live Presentation
Animashree Anandkumar · Jean Kossaifi

Fri 9:40 a.m. - 10:07 a.m. | Invited Talk 6: A Century of the Tensor Network Formulation from the Ising Model (Talk)
A hundred years have passed since the Ising model was proposed by Lenz in 1920. One finds that the square-lattice Ising model is already an example of a two-dimensional tensor network (TN), which is formed by contracting 4-leg tensors. In 1941, Kramers and Wannier assumed a variational state in the form of the matrix product state (MPS), and they optimized it 'numerically'. Baxter reached the concept of the corner transfer matrix (CTM) and performed a variational computation in 1968. Independently from these statistical studies, MPS was introduced by Affleck, Lieb, Kennedy and Tasaki (AKLT) in 1987 for the study of one-dimensional quantum spin chains, by Derrida for asymmetric exclusion processes, and also (implicitly) by the establishment of the density matrix renormalization group (DMRG) by White in 1992. After a brief (?) introduction of these prehistories, I'll speak about my contribution to this area: the applications of the DMRG and CTMRG methods to two-dimensional statistical models, including those on hyperbolic lattices, fractal systems, and random spin models. Analysis of the spin-glass state, which is related to learning processes, from the viewpoint of its entanglement structure would be a target of future studies in this direction.
Tomotoshi Nishino

Fri 10:07 a.m. - 10:15 a.m. | Invited Talk 6 Q&A by Tomotoshi (Q&A)
Tomotoshi Nishino

Fri 10:15 a.m. - 10:18 a.m. | Poster 1: Multi-Graph Tensor Networks by Yao Lei Xu (Poster Talk)
Yao Lei Xu

Fri 10:18 a.m. - 10:21 a.m. | Poster 2: High Performance Single-Site Finite DMRG on GPUs by Hao Hong (Poster Talk)
Hong Hao

Fri 10:21 a.m. - 10:24 a.m. | Poster 3: Variational Quantum Circuit Model for Knowledge Graph Embeddings by Yunpu Ma (Poster Talk)
Yunpu Ma

Fri 10:24 a.m. - 10:27 a.m. | Poster 4: Hybrid quantum-classical classifier based on tensor network and variational quantum circuit by Samuel Yen-Chi Chen (Poster Talk)
Yen-Chi Chen

Fri 10:27 a.m. - 10:30 a.m. | Poster 5: A Neural Matching Model based on Quantum Interference and Quantum Many-body System (Poster Talk)
Hui Gao

Fri 10:30 a.m. - 10:40 a.m. | Contributed Talk 1: Paper 3: Tensor network approaches for data-driven identification of non-linear dynamical laws (Talk)
To date, scalable methods for data-driven identification of non-linear governing equations do not exploit or offer insight into fundamental underlying physical structure. In this work, we show that various physical constraints can be captured via tensor network based parameterizations for the governing equation, which naturally ensures scalability. In addition to providing analytic results motivating the use of such models for realistic physical systems, we demonstrate that efficient rank-adaptive optimization algorithms can be used to learn optimal tensor network models without requiring a priori knowledge of the exact tensor ranks.
Alex Goeßmann

Fri 10:40 a.m. - 10:50 a.m. | Contributed Talk 2: Paper 6: Anomaly Detection with Tensor Networks (Talk)
Originating from condensed matter physics, tensor networks are compact representations of high-dimensional tensors. In this paper, the prowess of tensor networks is demonstrated on the particular task of one-class anomaly detection. We exploit the memory and computational efficiency of tensor networks to learn a linear transformation over a space with dimension exponential in the number of original features. The linearity of our model enables us to ensure a tight fit around training instances by penalizing the model's global tendency to predict normality via its Frobenius norm---a task that is infeasible for most deep learning models. Our method outperforms deep and classical algorithms on tabular datasets and produces competitive results on image datasets, despite not exploiting the locality of images.
Jinhui Wang

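The sketch below is only a schematic of the model class described in the abstract: each feature is embedded locally, and an MPS parameterizes a linear map over the exponentially large joint feature space. The cores are random here rather than trained, and the scoring rule is illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, chi = 10, 6   # illustrative sizes

# Local feature map phi(x_i) = (1, x_i); the full map is their outer product,
# a vector of dimension 2**n_features that is never formed explicitly.
def local_map(x):
    return np.stack([np.ones_like(x), x], axis=-1)       # shape (n_features, 2)

# MPS "weight matrix": one (2, chi, chi) core per feature (random here; learned in the paper).
cores = rng.normal(size=(n_features, 2, chi, chi)) / np.sqrt(chi)

def score(x):
    """Norm of the MPS applied to the implicit 2**n_features feature vector of x."""
    phi = local_map(x)
    v = np.ones(chi)
    for core, p in zip(cores, phi):
        v = v @ (p[0] * core[0] + p[1] * core[1])          # contract one site at a time
    return np.linalg.norm(v)

normal_point, odd_point = rng.normal(size=n_features), 5 + rng.normal(size=n_features)
print(score(normal_point), score(odd_point))
```
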
Fri 10:50 a.m. - 11:00 a.m. | Contributed Talk 3: Paper 32: High-order Learning Model via Fractional Tensor Network Decomposition (Talk)
We consider high-order learning models in which the weight tensor is represented by a (symmetric) tensor network (TN) decomposition. Although such models have been widely used on various tasks, it is challenging to determine the optimal order in complex systems (e.g., deep neural networks). To tackle this issue, we introduce a new notion of fractional tensor network (FrTN) decomposition, which generalizes conventional TN models with an integer order by allowing the order to be an arbitrary fraction. Due to the density of fractions in the field of real numbers, the order of the model can be formulated as a learnable parameter and simply optimized by stochastic gradient descent (SGD) and its variants. Moreover, we uncover that FrTN strongly connects to well-known methods such as lp-pooling (Gulcehre et al., 2014) and "squeeze-and-excitation" (Hu et al., 2018) operations in deep learning. On the numerical side, we apply the proposed model to enhancing the classic ResNet-26/50 (He et al., 2016) and MobileNet-v2 (Sandler et al., 2018) on both the CIFAR-10 and ILSVRC-12 classification tasks, and the results demonstrate the effectiveness brought by the learnable order parameters in FrTN.
Chao Li

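The connection to lp-pooling can be illustrated directly: a pooling layer whose order p is itself a learnable parameter optimized by SGD, as in the hypothetical PyTorch sketch below (this shows the learnable-order idea only, not the FrTN model).

```python
import torch

class LpPool(torch.nn.Module):
    """l_p pooling with a learnable order p, trained jointly with the rest of the model."""
    def __init__(self, p_init=2.0):
        super().__init__()
        # parameterize p = 1 + exp(log_p) so the order stays above 1 during training
        self.log_p = torch.nn.Parameter(torch.log(torch.tensor(p_init - 1.0)))

    def forward(self, x):                      # x: (batch, window)
        p = 1.0 + self.log_p.exp()
        return x.abs().pow(p).mean(dim=-1).pow(1.0 / p)

# Toy fit: SGD pushes p upward so the pooled value approaches max-pooling targets.
torch.manual_seed(0)
pool = LpPool()
opt = torch.optim.SGD(pool.parameters(), lr=0.1)
x = torch.rand(256, 8)
target = x.max(dim=-1).values
for _ in range(200):
    opt.zero_grad()
    loss = ((pool(x) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(float(1.0 + pool.log_p.exp()), float(loss))
```
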
Fri 11:00 a.m. - 11:45 a.m. | Panel Discussion 1: Theoretical, Algorithmic and Physical (Discussion Panel)
Theoretical, algorithmic and physical discussions of quantum tensor networks in machine learning.
Jacob Biamonte · Ivan Oseledets · Jens Eisert · Nadav Cohen · Guillaume Rabusseau · Xiao-Yang Liu

Fri 11:45 a.m. - 12:00 p.m. | Break

Fri 12:00 p.m. - 12:45 p.m. | Panel Discussion 2: Software and High Performance Implementation (Discussion Panel)
Software and high-performance implementation discussion of quantum tensor networks in machine learning.
Glen Evenbly · Martin Ganahl · Paul Springer · Xiao-Yang Liu

Fri 12:45 p.m. - 1:00 p.m. | Break

Fri 1:00 p.m. - 1:28 p.m. | Invited Talk 7: cuTENSOR: High-Performance CUDA Tensor Primitives (Talk)
This talk discusses cuTENSOR, a high-performance CUDA library for tensor operations that efficiently handles the ubiquitous presence of high-dimensional arrays (i.e., tensors) in today's HPC and DL workloads. The library supports highly efficient tensor operations such as tensor contractions, element-wise tensor operations such as tensor permutations, and tensor reductions. While providing high performance, cuTENSOR also enables users to express their mathematical equations for tensors in a straightforward way that hides the complexity of dealing with these high-dimensional objects behind an easy-to-use API.
Paul Springer

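cuTENSOR itself is a C/CUDA library, so the snippet below only restates the kinds of mode-labelled operations it accelerates (contraction, permutation, reduction) as NumPy einsum calls; the mode labels and shapes are illustrative.

```python
import numpy as np

# A mode-labelled contraction of the kind cuTENSOR exposes, written as an einsum:
#   C[m, u, n, v] = sum_{h, k} A[m, h, k, n] * B[u, k, v, h]
rng = np.random.default_rng(0)
A = rng.normal(size=(32, 8, 8, 32))   # modes m, h, k, n
B = rng.normal(size=(16, 8, 16, 8))   # modes u, k, v, h
C = np.einsum("mhkn,ukvh->munv", A, B)
print(C.shape)                         # (32, 16, 32, 16)

# The other primitive families mentioned in the talk:
At = np.einsum("mhkn->nkhm", A)        # tensor permutation
r = np.einsum("mhkn->m", A)            # tensor reduction over h, k, n
print(At.shape, r.shape)
```
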
Fri 1:28 p.m. - 1:35 p.m. | Invited Talk 7 Q&A by Paul (Q&A)
Paul Springer

Fri 1:35 p.m. - 2:05 p.m. | Invited Talk 8: TensorNetwork: A Python Package for Tensor Network Computations (Talk)
TensorNetwork is an open-source Python package for tensor network computations. It has been designed to help researchers and engineers rapidly develop highly efficient tensor network algorithms for physics and machine learning applications. After a brief introduction to tensor networks, I will discuss some of the main design principles of the TensorNetwork package and show how one can use it to speed up tensor network algorithms by running them on accelerated hardware or by exploiting tensor sparsity.
Martin Ganahl

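A minimal usage sketch, assuming the Node/edge/contract interface described in the package's documentation (pip install tensornetwork); the tensors themselves are illustrative.

```python
import numpy as np
import tensornetwork as tn   # pip install tensornetwork

# Two vectors joined along their only legs: the contracted network is their inner product.
a = tn.Node(np.ones(10))
b = tn.Node(np.ones(10))
edge = a[0] ^ b[0]                        # connect leg 0 of a to leg 0 of b
print(tn.contract(edge).tensor)           # 10.0

# A small chain v - M - w contracted with the greedy contractor.
v = tn.Node(np.random.rand(4))
M = tn.Node(np.random.rand(4, 4))
w = tn.Node(np.random.rand(4))
v[0] ^ M[0]
M[1] ^ w[0]
print(tn.contractors.greedy([v, M, w]).tensor)   # scalar v^T M w
```
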
Fri 2:05 p.m. - 2:15 p.m. | Invited Talk 8 Q&A by Martin (Q&A)
Martin Ganahl

Fri 2:15 p.m. - 2:51 p.m. | Invited Talk 9: Tensor Network Models for Structured Data (Talk)
In this talk, I will present uniform tensor network models (also known as translation-invariant tensor networks), which are particularly suited to modelling structured data such as sequences and trees. Uniform tensor networks are tensor networks in which the core tensors appearing in the decomposition of a given tensor are all equal, which can be seen as a weight-sharing mechanism. In the first part of the talk, I will show how uniform tensor networks are particularly suited to representing functions defined over sets of structured objects such as sequences and trees. I will then present how these models are related to classical computational models such as hidden Markov models, weighted automata, second-order recurrent neural networks and context-free grammars. In the second part of the talk, I will present a classical learning algorithm for weighted automata and show how it can be interpreted as a means to convert non-uniform tensor networks to uniform ones. Lastly, I will present ongoing work leveraging the tensor network formalism to design efficient and versatile probabilistic models for sequence data.
Guillaume Rabusseau

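A weighted automaton is exactly such a uniform model: one transition matrix per symbol, reused at every position. The sketch below (a standard textbook example, not from the talk) computes the number of occurrences of 'a' in a string as alpha^T A_{x1} ... A_{xn} omega.

```python
import numpy as np

# A 2-state weighted automaton f(w) = alpha^T A_{w1} ... A_{wn} omega.
# With these weights it counts the 'a's in the string; the same transition
# matrices are reused at every position, i.e. a uniform tensor network.
alpha = np.array([1.0, 0.0])
omega = np.array([0.0, 1.0])
A = {"a": np.array([[1.0, 1.0], [0.0, 1.0]]),
     "b": np.eye(2)}

def f(word):
    v = alpha
    for symbol in word:
        v = v @ A[symbol]
    return v @ omega

print(f("abab"), f("bbb"), f("aaaa"))   # 2.0 0.0 4.0
```
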
Fri 2:51 p.m. - 3:00 p.m. | Invited Talk 9 Q&A by Guillaume (Q&A)
Guillaume Rabusseau

Fri 3:00 p.m. - 3:30 p.m. | Invited Talk 10: Getting Started with Tensor Networks (Talk)
I will provide an overview of the tensor network formalism and its applications, and discuss the key operations, such as tensor contractions, required for building tensor network algorithms. I will also demonstrate the TensorTrace graphical interface, a software tool which is designed to allow users to implement and code tensor network routines easily and effectively. Finally, the utility of tensor networks towards tasks in machine learning will be briefly discussed.
Glen Evenbly

Fri 3:30 p.m. - 3:40 p.m. | Invited Talk 10 Q&A by Evenbly (Q&A)
Glen Evenbly

Fri 3:40 p.m. - 3:50 p.m. | Contributed Talk 4: Paper 27: Limitations of gradient-based Born Machine over tensor networks on learning quantum nonlocality (Talk)
Nonlocality is an important constituent of quantum physics which lies at the heart of many striking features of quantum states, such as entanglement. An important category of highly entangled quantum states are Greenberger-Horne-Zeilinger (GHZ) states, which play key roles in various quantum-based technologies and are of particular interest for benchmarking noisy quantum hardware. The Born machine, a novel quantum-inspired generative model that leverages the probabilistic nature of quantum physics, has shown great success in learning classical and quantum data over tensor network (TN) architectures. To this end, we investigate the task of training a Born machine to learn the GHZ state over two different tensor network architectures. Our results indicate that gradient-based training schemes over a TN Born machine fail to learn the nonlocal information of the coherent superposition (or parity) of the GHZ state. This leads to the important question of what kind of architecture design, initialization and optimization schemes would be more suitable for learning the nonlocal information hidden in the quantum state, and whether we can adapt quantum-inspired training algorithms to learn such quantum states.
Khadijeh Najafi

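For reference, the GHZ state itself has an exact bond-dimension-2 MPS form, so its Born distribution is easy to write down even though gradient-based training struggles to find it; the sketch below is illustrative (qubit count and variable names are assumptions).

```python
import numpy as np
from itertools import product

n = 4   # number of qubits (illustrative)

# GHZ as a bond-dimension-2 MPS: the core copies the bond value into the physical index.
core = np.zeros((2, 2, 2))     # (physical, left bond, right bond)
core[0, 0, 0] = core[1, 1, 1] = 1.0
left = right = np.ones(2)

def amplitude(bits):
    v = left
    for b in bits:
        v = v @ core[b]
    return (v @ right) / np.sqrt(2)

# Born-rule distribution: all weight sits on 00...0 and 11...1, the parity (nonlocal)
# information that the contributed talk finds hard to learn with gradient-based training.
for bits in product((0, 1), repeat=n):
    p = amplitude(bits) ** 2
    if p > 1e-12:
        print(bits, p)
```
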
Fri 3:50 p.m. - 4:00 p.m. | Contributed Talk 5: Paper 19: Deep convolutional tensor network (Talk)
Neural networks have achieved state-of-the-art results in many areas, supposedly due to parameter sharing, locality, and depth. Tensor networks (TNs) are linear-algebraic representations of quantum many-body states based on their entanglement structure, and they have found use in machine learning. We devise a novel TN-based model called the Deep Convolutional Tensor Network (DCTN) for image classification, which has parameter sharing, locality, and depth. It is based on the entangled plaquette states (EPS) tensor network, and we show how EPS can be implemented as a backpropagatable layer. We test DCTN on the MNIST, FashionMNIST, and CIFAR10 datasets. A shallow DCTN performs well on MNIST and FashionMNIST with a small parameter count. Unfortunately, depth increases overfitting and thus decreases test accuracy; DCTN of any depth also performs badly on CIFAR10 due to overfitting, and it remains to be determined why. We discuss how the hyperparameters of DCTN affect its training and overfitting.
Philip Blagoveschensky

Fri 4:00 p.m. - 4:04 p.m. | Poster 6: Paper 16: Quantum Tensor Networks for Variational Reinforcement Learning (Poster Talk)
Yiming Fang

Fri 4:04 p.m. - 4:07 p.m. | Poster 7: Paper 13: Quantum Tensor Networks, Stochastic Processes, and Weighted Automata (Poster Talk)
Sandesh Adhikary

Fri 4:07 p.m. - 4:10 p.m. | Poster 8: Paper 24: Modeling Natural Language via Quantum Many-body Wave Function and Tensor Network (Poster Talk)
YITONG YAO

Fri 4:10 p.m. - 4:32 p.m. | Invited Talk 11: Tensor Methods for Efficient and Interpretable Spatiotemporal Learning (Talk)
Multivariate spatiotemporal data is ubiquitous in science and engineering, from climate science to sports analytics to neuroscience. Such data contain higher-order correlations and can be represented as a tensor. Tensor latent factor models provide a powerful tool for reducing dimensionality and discovering higher-order structures. However, existing tensor models are often slow or fail to yield interpretable latent factors. In this talk, I will demonstrate advances in tensor methods to generate interpretable latent factors for high-dimensional spatiotemporal data. We provide theoretical guarantees and demonstrate their applications to real-world climate, basketball, and neuroscience data.
Rose Yu

Fri 4:32 p.m. - 4:40 p.m. | Invited Talk 11 Q&A by Rose (Q&A)
Rose Yu

Fri 4:40 p.m. - 5:10 p.m. | Invited Talk 12: Learning Quantum Channels with Tensor Networks (Talk)
We present a new approach to quantum process tomography, the reconstruction of an unknown quantum channel from measurement data. Specifically, we combine a tensor-network representation of the Choi matrix (a complete description of a quantum channel) with unsupervised machine learning of single-shot projective measurement data. We show numerical experiments for both unitary and noisy quantum circuits, for a number of qubits well beyond the reach of standard process tomography techniques.
Giacomo Torlai

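A small sketch of the object being learned: the Choi matrix of a single-qubit channel assembled from (assumed, illustrative) Kraus operators, together with the positivity and trace-preservation checks a reconstruction would have to satisfy. This shows the definition only, not the paper's tensor-network ansatz or training.

```python
import numpy as np

# Choi matrix of a single-qubit amplitude-damping channel, built from its Kraus operators.
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

def channel(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# Choi = sum_{ij} |i><j| (x) E(|i><j|): a complete description of the channel.
d = 2
choi = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        Eij = np.zeros((d, d))
        Eij[i, j] = 1.0
        choi += np.kron(Eij, channel(Eij))

# Sanity checks: the Choi matrix is positive semidefinite (complete positivity) and its
# partial trace over the output system is the identity (trace preservation).
print(np.min(np.linalg.eigvalsh(choi)) >= -1e-12)
print(np.allclose(choi.reshape(d, d, d, d).trace(axis1=1, axis2=3), np.eye(d)))
```
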
Fri 5:10 p.m. - 5:20 p.m. | Invited Talk 12 Q&A (Q&A)
Giacomo Torlai

Fri 5:20 p.m. - 5:25 p.m. | Closing Remarks (Talk)
TBD
Xiao-Yang Liu

Author Information
Xiao-Yang Liu (Columbia University)
Qibin Zhao (RIKEN AIP)
Jacob Biamonte (Skolkovo Institute of Science and Technology)
Cesar F Caiafa (CONICET/UBA)
Paul Pu Liang (Carnegie Mellon University)
Nadav Cohen (Tel Aviv University)
Stefan Leichenauer (X, The Moonshot Factory)