Expo Talk Panel
Ring-1T, Ring-linear and Ming-Flash-Omni: Scaling Knowledge-Enhanced Large Language Models for Reasoning and Efficiency
Han Peng · Yankun Ren · Liang Jiang · Richard Sikang Bian · Jun Zhou
Upper Level Room 28A-E
The Ling 2.0 series represents a new generation of large language models designed around knowledge enhancement, reasoning efficiency, and scalable architecture innovation. Built on a trillion-scale sparse Mixture-of-Experts (MoE) foundation, Ling-1T activates roughly 50B parameters per token and combines FP8 mixed-precision pipelines with 1F1B interleaved scheduling, yielding over 40% higher training throughput with negligible accuracy loss (<0.1%).
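To make the "active parameters per token" idea concrete, the following is a minimal sketch of sparse MoE top-k routing, in which each token is dispatched to only a few experts so most expert weights stay idle. The layer sizes, expert count, and top_k below are illustrative assumptions, not the Ling 2.0 configuration.

```python
# Minimal sparse MoE routing sketch (illustrative sizes, not the Ling 2.0 config).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model). The router scores every expert, but each
        # token is routed to only top_k of them, so only that fraction of the
        # expert parameters is active for the token.
        scores = self.router(x)                          # (tokens, experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # per-token expert choice
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# Toy usage: 8 experts, 2 active per token -> roughly 1/4 of expert weights used.
layer = SparseMoELayer(d_model=64, d_ff=256, num_experts=8, top_k=2)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```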
This talk presents the technical journey behind Ling-mini, Ling-flash, and Ling-1T, focusing on (1) efficient large-scale training systems for trillion-parameter models; (2) the Ling Scaling Law and its implications for cross-domain reasoning; (3) hybrid attention and RL-based alignment strategies that enable both concise reasoning and long-context understanding; and (4) how these architectural and algorithmic advances empower industrial applications such as financial risk modeling and knowledge-grounded agents.
We will conclude with open-sourced implementations (inclusionAI on Hugging Face and ModelScope) and future research directions toward trustworthy, efficient, and domain-enhanced LLMs.
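As a starting point for the open-sourced implementations, here is a minimal sketch of loading one of the released checkpoints from the inclusionAI organization via the Hugging Face transformers library. The repository id and generation settings are assumptions for illustration; consult the inclusionAI pages on Hugging Face or ModelScope for the exact model names and recommended usage.

```python
# Hedged sketch: loading an inclusionAI checkpoint with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "inclusionAI/Ling-1T"  # assumed repository id; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # use the precision stored in the checkpoint
    device_map="auto",    # shard across available GPUs
    trust_remote_code=True,
)

prompt = "Explain the Ling Scaling Law in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```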
Session 1: Ring-1T: Scaling Reinforcement Learning for Trillion-Scale Thinking Model
Session 2: Ring-linear: An Efficient Hybrid Architecture for Long-Context Reasoning
Session 3: Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation