Positional Encoding: Past, Present, and Future
Abstract
Positional encoding is a foundational yet often opaque component of Transformer architectures, underpinning how self-attention mechanisms capture sequence order in language, vision, and multimodal models. Despite its centrality to the success of modern LLMs and other attention-reliant architectures, the mathematical intuition behind positional encoding remains challenging and inaccessible to many researchers and practitioners. This workshop aims to demystify positional encoding by bridging formal theory with intuitive understanding and practical experimentation. Through a series of guided lectures, participants will explore the operational principles behind effective positional representations, the evolution of key methods (from sinusoidal and learned embeddings to rotary and relative encodings), and the open challenges that motivate current research directions. We will also provide open-source code implementations, mathematical visualizations, and collaborative ideation sessions to generate new positional encoding concepts. By lowering the barrier to entry for this mathematically intensive yet crucial topic, the workshop seeks to foster deeper understanding, interdisciplinary exchange, and novel contributions to the future of positional encoding and Transformer design.
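As a concrete reference point for the methods named above, the sketch below implements the classic sinusoidal formulation from "Attention Is All You Need" in NumPy. The function name and shapes are illustrative, not drawn from the workshop's released implementations, and an even model dimension is assumed.

```python
# Minimal sketch of sinusoidal positional encoding (Vaswani et al., 2017).
# Assumes d_model is even; names and shapes are illustrative.
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position codes."""
    positions = np.arange(seq_len)[:, None]              # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]              # (1, d_model/2)
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)  # per-dimension frequency
    angles = positions * angle_rates                       # (seq_len, d_model/2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions use cosine
    return pe

# Example: encode a sequence of 8 tokens for a 16-dimensional model.
print(sinusoidal_positional_encoding(8, 16).shape)  # (8, 16)
```

The geometric spacing of frequencies gives each position a distinct, smoothly varying code, which is the intuition the guided lectures build on before moving to rotary and relative variants.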
Schedule