NIPS 2013


Workshop

Large Scale Matrix Analysis and Inference

Reza Zadeh · Gunnar Carlsson · Michael Mahoney · Manfred K. Warmuth · Wouter M Koolen · Nati Srebro · Satyen Kale · Malik Magdon-Ismail · Ashish Goel · Matei A Zaharia · David Woodruff · Ioannis Koutis · Benjamin Recht

Harvey's Tallac

Much of machine learning is based on linear algebra.
Often, the prediction is a function of a dot product between
the parameter vector and the feature vector, which essentially
assumes some kind of independence between the features.
In contrast, matrix parameters can be used to learn interrelations
between features: the (i,j)-th entry of the parameter matrix
represents how feature i is related to feature j.
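As a minimal sketch of this contrast (illustrative only, with arbitrary random data): a vector parameter scores each feature independently through a dot product, while a matrix parameter scores pairwise feature interactions through a bilinear form.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)  # feature vector

# Vector parameter: prediction is a dot product; each feature
# contributes independently through its own weight.
w = rng.standard_normal(4)
linear_pred = w @ x

# Matrix parameter: the (i, j)-th entry weights the interaction
# between feature i and feature j, so the bilinear form x^T W x
# captures interrelations between features.
W = rng.standard_normal((4, 4))
bilinear_pred = x @ W @ x

print(linear_pred, bilinear_pred)
```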

This richer modeling has become very popular. In some applications,
like PCA and collaborative filtering, the explicit goal is inference
of a matrix parameter. Yet in others, like direction learning and
topic modeling, the matrix parameter instead pops up in the algorithms
as the natural tool to represent uncertainty.
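A small illustration of the first case, where a matrix is the explicit object of inference (a sketch with synthetic data, not drawn from any particular paper): PCA-style problems recover a low-rank matrix, e.g. via a truncated SVD.

```python
import numpy as np

# Synthetic rank-8 data matrix (50 samples, 30 features).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 30))

# The inferred parameter is itself a matrix: the best rank-k
# approximation of A, obtained from a truncated SVD
# (Eckart-Young theorem).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 8
A_k = (U[:, :k] * s[:k]) @ Vt[:k]

print(np.linalg.norm(A - A_k))  # near zero, since A has rank 8
```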

The emergence of large matrices in many applications has
brought with it a slew of new algorithms and tools.
Over the past few years, matrix analysis and numerical linear
algebra on large matrices have become a thriving field.
Manipulating such large matrices also makes it necessary
to think about computer systems issues.

This workshop aims to bring researchers in large scale
machine learning and large scale numerical linear algebra
closer together to foster cross-talk between the two fields. The
goal is to encourage machine learning researchers to work
on numerical linear algebra problems, to inform machine
learning researchers about new developments on large scale
matrix analysis, and to identify unique challenges and
opportunities. The workshop will conclude with a
session of contributed posters.

http://largematrix.org
