Demo: Building Maternal Health LLMs for Low-Resource Settings
Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains, including healthcare applications. However, developing and deploying these models typically requires substantial computational resources and large training datasets, creating significant barriers for low-resource languages. This paper presents a tailored pipeline for designing and serving low-resource LLMs in maternal health. We introduce two key contributions: a model design and adaptation method optimized for healthcare applications in low-resource settings, and a model deployment and serving pipeline featuring an automated auditor framework for continuous quality assessment of model responses in production. Our approach is validated through UlizaMama, a deployed LLaMA3-based LLM serving over 12,000 daily maternal health queries in Kenya.