Expo Talk Panel
Agentic AI/RL
Daniel Han · Davide Testuggine · Joe Spisak · Sanyam Bhutani
Upper Level Ballroom 20AB
The transition from static language models to agentic AI systems driven by reinforcement learning (RL) places environments at the center of research and deployment. Environments provide the substrate where agents act, learn, and are evaluated—ranging from lightweight simulators and synthetic tasks to rich multi-agent ecosystems and real-world interfaces. Building and scaling these environments requires specialized tools and systems: standardized hubs for discovery and sharing, interfaces for reproducibility, and infrastructure that connects environments seamlessly to trainers, inference engines, and evaluation pipelines.
This workshop will highlight the tools, environments, and system innovations enabling the next generation of agentic AI. Topics will include scalable RL environment frameworks, benchmarks for safety and robustness, high-performance simulators optimized for heterogeneous hardware, and environment–trainer integration at scale. We will also explore how environments interface with large-model post-training workflows, providing the data and feedback loops necessary for reward shaping, alignment, and deployment in production systems.
By convening researchers, environment developers, and systems engineers, the workshop will create a venue to examine how environments, tools, and infrastructure together shape the future of agentic AI.