

Invited Talk

Social Intelligence

Blaise Aguera y Arcas

West Exhibition Hall C, B3

Abstract:

In the past decade, we’ve figured out how to build artificial neural nets that can achieve superhuman performance at almost any task for which we can define a loss function and gather or create a sufficiently large dataset. While this is unlocking a wealth of valuable applications, it has not created anything resembling a “who”, raising interesting new (and, sometimes, old) perspectives on what we really mean when we refer to “general intelligence” in big-brained animals, including ourselves. Public scrutiny has also intensified regarding a host of seemingly unrelated concerns: How can we make fair and ethical models? How can we have privacy in a world where our data are the fuel for training all of these models? Does AI at scale increase or curtail human agency? Will AI help or harm the planet ecologically, given the exponentially increasing computational loads we’ve started to see? Do we face a real risk of runaway AI without human value alignment? This talk will be technically grounded, but will also address these big questions and some non-obvious interconnections between them. We will begin with privacy and agency in today’s ML landscape, noting how new technologies for efficient on-device inference and federated computation offer ways to scale beneficial applications without incurring many of the downsides of current mainstream methods. We will then delve deeper into the limitations of the optimization framework for ML, and explore alternative approaches involving meta-learning, evolution strategies, populations, sociality, and cultural accumulation. We hypothesize that this relatively underexplored approach to general intelligence may be both fruitful in the near term and more optimistic in its long-term outlook.
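As a concrete illustration of what a population-based alternative to direct loss minimization can look like, the sketch below implements a basic evolution strategy in Python. It is a minimal sketch, not material from the talk: the toy objective, hyperparameters, and function names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a basic evolution strategy (ES), one of the
# population-based alternatives to gradient-based loss optimization
# named in the abstract. Objective and hyperparameters are assumptions.

def objective(theta):
    # Toy fitness: negative squared distance from an arbitrary target.
    target = np.array([3.0, -2.0, 0.5])
    return -np.sum((theta - target) ** 2)

def evolution_strategy(dim=3, population=50, sigma=0.1, lr=0.05, steps=300):
    theta = np.zeros(dim)  # current "parent" parameters
    for _ in range(steps):
        noise = np.random.randn(population, dim)   # random perturbations
        candidates = theta + sigma * noise         # population of offspring
        fitness = np.array([objective(c) for c in candidates])
        # Normalize fitness so the update is invariant to its scale.
        scores = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
        # Move theta toward perturbations that scored above average.
        theta = theta + lr / (population * sigma) * noise.T @ scores
    return theta

if __name__ == "__main__":
    print(evolution_strategy())  # parameters should approach the toy target
```

Here no gradient of the objective is ever computed; the population of perturbed candidates supplies the search direction, which is one reason such methods apply even when a differentiable loss is unavailable.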
