[ Hall H ]
There has recently been widespread discussion of whether GPT-3, LaMDA 2, and related large language models might be sentient. Should we take this idea seriously? I will discuss the underlying issue and will break down the strongest reasons for and against.
[ Hall H ]
Conformal inference methods are becoming all the rage in academia and industry alike. In a nutshell, these methods deliver exact prediction intervals for future observations without making any distributional assumption whatsoever, other than requiring exchangeable (e.g., i.i.d.) data. This talk will review the basic principles underlying conformal inference and survey some major contributions that have occurred in the last 2-3 years or so. We will discuss enhanced conformity scores applicable to quantitative as well as categorical labels. We will also survey novel methods that deal with situations where the distribution of observations can shift drastically — think of finance or economics, where market behavior can change over time in response to new legislation or major world events, or public health, where changes occur because of geography and/or policies. All along, we shall illustrate the methods with examples, including the prediction of election results and COVID-19 case trajectories.
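The basic recipe the abstract alludes to can be made concrete with split conformal prediction, the simplest member of the family: fit any point predictor on one half of the data, compute conformity scores (here, absolute residuals) on a held-out calibration half, and widen the prediction by a finite-sample-corrected quantile of those scores. The sketch below uses synthetic data and a least-squares fit purely for illustration; the specific model, noise level, and variable names are assumptions, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic regression data: y = 2x + Gaussian noise
n = 2000
x = rng.uniform(-1, 1, size=n)
y = 2 * x + rng.normal(scale=0.3, size=n)

# Split into a proper training half and a calibration half
x_tr, y_tr = x[:1000], y[:1000]
x_cal, y_cal = x[1000:], y[1000:]

# Fit ANY point predictor on the training half (here: least squares);
# conformal validity does not depend on this model being correct
slope, intercept = np.polyfit(x_tr, y_tr, 1)
predict = lambda t: slope * t + intercept

# Conformity scores on the calibration half: absolute residuals
scores = np.abs(y_cal - predict(x_cal))

# Finite-sample-corrected quantile for 90% marginal coverage:
# the ceil((n+1)(1-alpha))/n adjustment is what makes coverage exact
alpha = 0.1
n_cal = len(scores)
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new point
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

Under exchangeability, the interval `[predict(x) - q, predict(x) + q]` covers a fresh observation with probability at least `1 - alpha`, regardless of how crude the fitted model is; the enhanced scores mentioned in the talk replace the absolute residual with more adaptive choices.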
[ Hall H ]
Remarkable model performance makes news headlines and compelling demos, but these advances rarely translate to a lasting impact on real-world users. A common anti-pattern is overlooking the dynamic, complex, and unexpected ways humans interact with AI, which in turn limits the adoption and usage of AI in practical contexts. To address this, I argue that human-AI interaction should be considered a first-class object in designing AI applications.
In this talk, I present a few novel interactive systems that use AI to support complex real-life tasks. I discuss tensions and solutions in designing human-AI interaction, and critically reflect on my own research to share hard-earned design lessons. Factors such as user motivation, coordination between stakeholders, social dynamics, and user’s and AI’s adaptivity to each other often play a crucial role in determining the user experience of AI, even more so than model accuracy. My call to action is that we need to establish robust building blocks for “Interaction-Centric AI”—a systematic approach to designing and engineering human-AI interaction that complements and overcomes the limitations of model- and data-centric views.
[ Hall H ]
Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.
These outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone.
[ Hall H ]
NeurIPS has been in existence for more than 3 decades, each one marked by a dominant trend. The pioneering years saw the burgeoning of back-prop nets, the coming-of-age years blossomed with convex optimization, regularization, Bayesian methods, boosting, kernel methods, to name a few, and the junior years have been dominated by deep nets and big data. And now, recent analyses conclude that using ever bigger data and deeper networks is not a sustainable way of progressing. Meanwhile, other indicators show that Machine Learning is increasingly reliant upon good data and benchmarks, not only to train more powerful and/or more compact models, but also to soundly evaluate new ideas and to stress test models on their reliability, fairness, and protection against various attacks, including privacy attacks.
Simultaneously, in 2021, the NeurIPS Dataset and Benchmark track was launched and the Data-Centric AI initiative was born. This kickstarted the "data-centric era". It is gaining momentum in response to the new needs of data scientists who, admittedly, spend more time on understanding problems, designing experimental settings, and engineering datasets, than on designing and training ML models.
We will retrace the enormous collective efforts made by our community since the 1980s to share datasets and …
[ Hall H ]
I will describe a training algorithm for deep neural networks that does not require the neurons to propagate derivatives or remember neural activities. The algorithm can learn multi-level representations of streaming sensory data on the fly without interrupting the processing of the input stream. The algorithm scales much better than reinforcement learning and would be much easier to implement in cortex than backpropagation.
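The abstract does not specify the algorithm's details, but the idea of learning without propagating derivatives between layers can be illustrated with a purely layer-local objective: each layer tries to make a scalar "goodness" of its own activities high on positive data and low on negative data, updating its weights from quantities available inside the layer alone. The sketch below is one minimal instance of such a local rule on toy data; the goodness definition (sum of squared ReLU activities), the logistic objective, and all hyperparameters are my assumptions for illustration, not the speaker's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LocalLayer:
    """A layer trained with a local objective: its goodness (sum of
    squared ReLU activities) should exceed a threshold theta on
    positive data and fall below it on negative data. No gradients
    cross layer boundaries."""

    def __init__(self, n_in, n_out, theta=2.0, lr=0.03):
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.theta, self.lr = theta, lr

    def forward(self, x):
        return np.maximum(0.0, x @ self.W + self.b)

    def train_step(self, x, label):
        # label = 1.0 for positive data, 0.0 for negative data
        pre = x @ self.W + self.b
        h = np.maximum(0.0, pre)
        goodness = np.sum(h ** 2, axis=1)
        p = sigmoid(goodness - self.theta)
        # Local gradient of the negative log-likelihood w.r.t. goodness
        dg = (p - label)[:, None]        # shape (batch, 1)
        dh = dg * 2.0 * h                # d goodness / d h = 2h
        dpre = dh * (pre > 0)            # ReLU derivative
        self.W -= self.lr * x.T @ dpre / len(x)
        self.b -= self.lr * dpre.mean(axis=0)
        return np.mean(-label * np.log(p + 1e-9)
                       - (1 - label) * np.log(1 - p + 1e-9))

# Toy positive/negative streams (hypothetical): shifted Gaussian blobs
pos = rng.normal(size=(256, 8)) + 2.0
neg = rng.normal(size=(256, 8)) - 2.0

layer = LocalLayer(8, 16)
losses = [(layer.train_step(pos, 1.0) + layer.train_step(neg, 0.0)) / 2
          for _ in range(200)]
```

Because each update uses only the layer's own input, activities, and scalar objective, layers can be stacked and trained greedily on a streaming input without ever pausing to run a backward pass, which is the property the abstract emphasizes.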