[ West Exhibition Hall C + B3 ]
Abstract
A common model of AI suggests that there is a single measure of intelligence, often called AGI, and that AI systems are agents who can possess more or less of this intelligence. Cognitive science, in contrast, suggests that there are multiple forms of intelligence, that these intelligences trade off against each other, and that they have distinctive developmental profiles. The adult ability to accomplish goals and maximize utilities is often seen as the quintessential form of intelligence. However, this ability to exploit is in tension with the ability to explore. Children are particularly adept at exploration, though at the cost of competent action and decision-making. Human intelligence also relies heavily on cultural transmission, passing on information from one generation to the next, and children are also particularly adept at such learning.

Thinking about exploration and transmission can change our approach to AI systems. Large language models and similar systems are best understood as cultural technologies, like writing, pictures, and print, that enable information transmission. In contrast, our empirical work suggests that RL systems employing an intrinsic objective of empowerment gain can help capture the exploration we see in children.
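The abstract leaves "empowerment gain" at a high level. As a rough illustration only, and not code from the speaker's work, the sketch below computes a simple empowerment-style quantity for a tiny tabular environment: the mutual information between a uniformly random action and the resulting next state. The transition matrix P, the uniform action distribution, and the function name are illustrative assumptions; full empowerment maximizes this mutual information over action distributions (e.g., via Blahut-Arimoto), and the intrinsic reward would be the gain in this quantity.

```python
# Minimal sketch, assuming a small tabular environment (not the speaker's method).
import numpy as np

def empowerment_lower_bound(P):
    """P[a, s'] = probability of reaching next state s' after taking action a.

    Returns I(A; S') in bits under a uniform action distribution, which is a
    lower bound on empowerment (true empowerment maximizes over p(a)).
    """
    P = np.asarray(P, dtype=float)
    n_actions = P.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)   # uniform choice over actions
    p_s = p_a @ P                               # resulting marginal over next states
    # I(A; S') = sum_{a,s'} p(a) P(s'|a) log2( P(s'|a) / p(s') )
    ratio = np.divide(P, p_s, out=np.ones_like(P), where=(P > 0))
    return float(np.sum(p_a[:, None] * P * np.log2(ratio)))

# Two actions with distinguishable outcomes give the agent 1 bit of control ...
print(empowerment_lower_bound([[1.0, 0.0], [0.0, 1.0]]))  # ~1.0
# ... while actions whose outcomes are identical give it none.
print(empowerment_lower_bound([[0.5, 0.5], [0.5, 0.5]]))  # ~0.0
```

The intuition matches the abstract: states (or behaviors) from which the agent's actions reliably lead to many distinct futures score highly, so an agent rewarded for empowerment gain is driven to explore and expand its own influence rather than to maximize an external utility.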
[ West Exhibition Hall C + B3 ]
Abstract
Technological change typically occurs in three phases: basic research, scale-up, and industrial application, each with a different degree of methodological diversity—high, low, and medium, respectively. Historically, breakthroughs such as the steam engine and the Haber-Bosch process exemplify these phases and have had a profound impact on society. A similar pattern can be observed in the development of modern artificial intelligence (AI). In the scale-up phase of AI, large language models (LLMs) have emerged as the most prominent example. While LLMs can be seen as highly sophisticated knowledge representation techniques, they have not fundamentally advanced AI itself. The scale-up phase of AI was dominated by the transformer architecture. More recently, other architectures, such as state-space models and recurrent neural networks, have also been scaled up. For example, Long Short-Term Memory (LSTM) networks have been scaled up to xLSTM, which in many cases outperforms transformers. We are now transitioning into the third phase: industrial AI. In this phase, we are adapting AI methods to real-world applications in robotics, the life and earth sciences, engineering, and large-scale simulations that can be dramatically accelerated by AI methods. As we continue to develop these industrial AI methods, we expect to see an increase in methodological diversity, …
[ West Exhibition Hall C + B3 ]
Abstract
Anything is optimal given the right criteria: What are the optimal criteria as we invent the future of AI? This talk explores this question with a series of stories, including the development of affective computing, inspired in part by how the human brain uses emotion to help signal what matters to a person. One of these types of signals can be measured on the surface of the skin and has contributed to today’s AI+wearable technology helping save lives. As artificial emotional intelligence abilities grow, what have we learned about how to build optimal AI to engineer a future for people that is truly better? Hint: It's unlikely to be achieved by scaling up today’s models.
[ West Exhibition Hall C + B3 ]
Abstract
Humans learn through interaction and interact to learn. Automating highly dexterous tasks such as food handling, garment sorting, or assistive dressing relies on advances in mathematical modeling, perception, planning, and control, to name a few. The advances in data-driven approaches, together with the development of better simulation tools, allow these challenges to be addressed through systematic benchmarking of relevant methods. This can provide a better understanding of what theoretical developments need to be made and how practical systems can be implemented and evaluated to provide flexible, scalable, and robust solutions. But are we solving the appropriate scientific problems and taking the necessary steps toward general solutions? This talk will showcase some of the challenges in developing physical interaction capabilities in robots, give an overview of our ongoing work on multimodal representation learning, latent space planning, learning physically consistent reduced-order dynamics, and visuomotor skill learning, and peek into our recent work on olfaction encoding.