NIPS 2007


Workshop

Efficient Machine Learning - Overcoming Computational Bottlenecks in Machine Learning (Part 2)

Samy Bengio · Corinna Cortes · Dennis DeCoste · Francois Fleuret · Ramesh Natarajan · Edwin Pednault · Dan Pelleg · Elad Yom-Tov

Hilton: Mt. Currie S

The ever-increasing size of the data to be processed by machine learning algorithms has given rise to several approaches, from online algorithms to parallel and distributed computing on multi-node clusters. Nevertheless, it is not clear how modern machine learning approaches can either exploit such parallel machinery or respect strong constraints on the time available to handle training and/or test examples. This workshop will explore two alternatives: (1) modern machine learning approaches that can handle real-time processing at training and/or test time, under strict computational constraints (when the flow of incoming data is continuous and must be handled), and (2) modern machine learning approaches that can take advantage of new commodity hardware such as multicore processors, GPUs, and fast networks. This two-day workshop aims to set the agenda for future advances by fostering a discussion of new ideas and methods and by demonstrating the potential uses of readily available solutions. It will bring together researchers and practitioners to share their views and experience in applying machine learning at large scale.
