DreamPRM: Domain-Reweighted Process Reward Model for Multimodal Reasoning
Qi Cao · Ruiyi Wang · Ruiyi Zhang · Sai Ashish Somayajula · Pengtao Xie
Abstract
Extending process reward models (PRMs) to multimodal large language models (MLLMs) is hindered by the need for broad domain coverage, train–test distribution shift, and severe imbalance in dataset quality. We propose DreamPRM, a bi-level, domain-reweighted framework: the lower level fine-tunes the PRM with learnable per-domain weights so that high-quality reasoning signals are prioritized, while the upper level evaluates the resulting model on a separate meta set and updates those weights through an aggregation loss. Across diverse mathematical reasoning benchmarks, DreamPRM consistently improves state-of-the-art MLLMs and outperforms strong data-selection and test-time scaling baselines.
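As a rough illustration of the bi-level idea, the sketch below fine-tunes a toy PRM under softmax-normalized domain weights in the inner step, then updates those weights from a meta-set loss in the outer step via a one-step differentiable unroll. The linear scorer, synthetic data, loss choice, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal bi-level domain-reweighting sketch (illustrative; not DreamPRM's code).
import torch

torch.manual_seed(0)

n_domains, dim = 3, 8
# Synthetic per-domain training sets and a held-out meta set, standing in for
# the multi-domain PRM training data and the meta set described in the paper.
train = [(torch.randn(32, dim), torch.rand(32, 1)) for _ in range(n_domains)]
meta_x, meta_y = torch.randn(64, dim), torch.rand(64, 1)

# Toy PRM: a linear scorer with a sigmoid head over step features (assumption).
W = (0.1 * torch.randn(dim, 1)).requires_grad_()
b = torch.zeros(1, requires_grad=True)
log_w = torch.zeros(n_domains, requires_grad=True)  # learnable domain weights
outer_opt = torch.optim.SGD([log_w], lr=0.1)
inner_lr = 0.05
bce = torch.nn.BCELoss()

def score(W, b, x):
    """Predict a correctness score in (0, 1) for each example."""
    return torch.sigmoid(x @ W + b)

for step in range(200):
    # Lower level: domain-weighted PRM training loss.
    w = torch.softmax(log_w, dim=0)
    inner = sum(w[d] * bce(score(W, b, x), y) for d, (x, y) in enumerate(train))
    # One differentiable inner update; create_graph keeps the chain to log_w.
    gW, gb = torch.autograd.grad(inner, (W, b), create_graph=True)
    W1, b1 = W - inner_lr * gW, b - inner_lr * gb
    # Upper level: the meta-set (aggregation) loss of the updated PRM
    # drives the domain-weight update.
    meta = bce(score(W1, b1, meta_x), meta_y)
    outer_opt.zero_grad()
    meta.backward()
    outer_opt.step()
    # Commit the inner update to the PRM parameters.
    with torch.no_grad():
        W.copy_(W1), b.copy_(b1)
```

The one-step unroll is just one way to make the upper-level gradient tractable; domains whose data reduces the meta loss receive larger weights, which is the reweighting behavior the abstract describes.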