Workshop
Modern Machine Learning and Natural Language Processing
Ankur P Parikh · Avneesh Saluja · Chris Dyer · Eric Xing
Level 5; room 510 c
Fri 12 Dec, 5:30 a.m. PST
The structure, complexity, and sheer diversity of human language make Natural Language Processing (NLP) distinct from other areas of AI. Certain core NLP problems have traditionally been an inspiration for machine learning (ML) solutions, e.g., sequence tagging, syntactic parsing, and language modeling, primarily because these tasks can be easily abstracted into machine learning formulations (e.g., structured prediction, dimensionality reduction, or simpler regression and classification techniques). In turn, these formulations have facilitated the transfer of ideas such as (but not limited to) discriminative methods, Bayesian nonparametrics, neural networks, and low-rank / spectral techniques into NLP. Problems in NLP are particularly appealing to those doing core ML research due to the high-dimensional nature of the spaces involved (both the data and the label spaces) and the need to handle noise robustly, while principled, well-understood ML techniques are attractive to those in NLP since they potentially offer a remedy for ill-behaved heuristics and for the training-test domain mismatch that stems from those heuristics' limited ability to generalize.
But there are many other areas within NLP, such as semantic, discourse, and pragmatic analysis, summarization, and parts of machine translation, where the ML community is less involved and which continue to rely on linguistically motivated but imprecise heuristics that may benefit from new machine learning approaches. Similarly, there are paradigms in ML, statistics, and optimization, ranging from submodularity to bandit theory to Hilbert space embeddings, that have not been well explored in the context of NLP.
The goal of this workshop is to bring together both applied and theoretical researchers in natural language processing and machine learning to facilitate the discussion of new frameworks that can help advance modern NLP. Some key questions we will address include (but are not limited to):
- How can ML help provide novel representations and models to capture the structure of natural language?
- What NLP problems could benefit from new inference/optimization techniques?
- How can we design new ML paradigms to address the lack of annotated data in complex structured prediction problems such as knowledge extraction and semantics?
- What technical challenges posed by multilinguality, lexical variation in social media, and nonstandard dialects are under-researched in ML?
- Does ML offer more principled ways of dealing with the "overfitting" that results from repeated evaluation on the same benchmark datasets?
- How can we tackle "scalability bottlenecks" unique to natural language?
Interest among both communities is high, as evidenced by previous joint symposia at ACL-ICML (2011) and NAACL-ICML (2013), and we hope to continue the exploration of topics beneficial to both fields that these symposia initiated.