Description: Researchers and practitioners alike face a large variety of choices in how they design, train, and optimize AI models. At early stages, experimentation may be valuable for understanding the behavior of novel algorithms, while at later stages, tuning may be required to achieve the desired tradeoffs between evaluation metrics and resource utilization. Adaptive experimentation techniques such as Bayesian optimization and active learning enable efficient experimentation, often using 10-100x fewer compute resources. In this tutorial, we will give an overview of state-of-the-art methods in adaptive experimentation and, through hands-on demonstrations, show how these concepts can be applied to the optimization of PyTorch-based workflows via Ax, Meta's open-source software platform for adaptive experimentation. We will discuss how these tools can be used to characterize and optimize up to hundreds of hyperparameters, such as those found in neural network architectures, curricula and data augmentation, reinforcement learning algorithms, and configurations used in distributed training and serving. Concepts will be demonstrated via hands-on tutorials on resource-aware neural architecture search through multi-objective optimization and on characterizing scaling laws with active learning. Ax has been successfully applied to a variety of product, infrastructure, ML, and research applications at Meta and in the broader academic community.

Learning objectives:
- Hands-on Ax tutorials with time for Q&A
- Conceptual understanding of the latest modeling and algorithmic advances that power Ax (e.g., Gaussian process modeling, Bayesian optimization)
- Discussion of the components of Ax and their purpose in the library
- Understanding of the advanced offerings of the Ax platform
- Leave feeling confident in applying Ax to your research!
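To give a flavor of the adaptive-experimentation loop that Ax automates, here is a deliberately simple, self-contained Python sketch: each round evaluates a few candidate hyperparameter values, then shrinks the search window around the best result so far. This is only an illustration of the suggest-evaluate-update pattern, not Ax's API or a Bayesian optimization algorithm; the quadratic "validation loss" objective and all parameter names here are hypothetical stand-ins.

```python
def adaptive_search(objective, lo, hi, n_rounds=6, grid=5):
    """Toy adaptive loop over a single hyperparameter.

    Each round evaluates a small grid of candidates, then re-centers and
    halves the search window around the incumbent best point, so later
    trials are concentrated where earlier trials looked promising.
    """
    best_x, best_y = None, float("inf")
    center, width = (lo + hi) / 2, hi - lo
    for _ in range(n_rounds):
        for i in range(grid):
            # Candidate on the current window's grid, clipped to the bounds.
            x = center - width / 2 + width * i / (grid - 1)
            x = min(hi, max(lo, x))
            y = objective(x)  # "run the trial" and observe the metric
            if y < best_y:
                best_x, best_y = x, y
        # Focus the next round's trials around the best point seen so far.
        center, width = best_x, width / 2
    return best_x, best_y

# Hypothetical objective: validation loss as a function of learning rate,
# minimized at lr = 0.03 with a floor of 0.1.
best_lr, best_loss = adaptive_search(lambda lr: (lr - 0.03) ** 2 + 0.1, 0.0, 1.0)
```

With 30 total evaluations this homes in near the optimum at 0.03; in Ax, the analogous loop is driven by a learned surrogate model (e.g., a Gaussian process) rather than a fixed shrinking grid, which is what yields the large sample-efficiency gains described above.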