LFM2: Capable and efficient on-device multimodal foundation models.
Jimmy Smith
Abstract
While recent multimodal foundation model families emphasize efficiency at scale, there remains a gap for edge-first models that simultaneously lead in quality, speed, and memory efficiency on phones, tablets, and laptops, while remaining practical to pre-train and post-train. We present LFM2, the second generation of Liquid Foundation Models, optimized end-to-end for on-device deployment. LFM2 follows an edge-first design: it co-designs the architecture, pre-training, and post-training to optimize quality subject to on-device latency and peak-memory constraints.