BRASS: Budget-Aware RAW Sensor Sampling for Edge Vision via Co-Design
Abstract
Most camera systems read every RAW pixel at uniform precision, wasting measurement budget on uninformative regions. We present BRASS, a budget-aware RAW sensing framework that treats sensed bits as a first-class resource. A tiny policy network makes per-patch decisions on (i) whether to read a patch at all and (ii) the ADC bit-depth to read it at; a compact RAW backbone consumes the resulting sparse, mixed-precision tensor directly, without demosaicing. We train end-to-end with a budget-aware objective that exposes controllable accuracy–efficiency trade-offs. On Imagenette-RAW (160×160, RGGB), BRASS matches the accuracy of a small RGB baseline while incurring ≈0.47× the baseline's sensing-bit proxy, and it produces better-calibrated predictions (lower ECE, without temperature scaling). Synchronized forward-pass timings on an A800 GPU (batch = 128) show higher throughput, consistent with the reduced sensed work (policy cost included; ISP/demosaicing for RGB and host I/O excluded). Scope & limits: results are software-based; the sensing-bit proxy is not a full energy model; real deployment requires ROI readout and per-region ADC control, both of which modern CMOS sensors support. BRASS illustrates Learning-to-Sense co-design by optimizing what to measure and with how many bits to measure it, under explicit budgets. We will release training scripts, configs, and ONNX exports upon acceptance.
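To make the abstract's mechanism concrete, the sketch below shows one plausible reading of the per-patch policy, mixed-precision re-quantization, sensing-bit proxy, and budget-aware objective. It is a minimal PyTorch reconstruction under stated assumptions, not the released BRASS code: the bit-depth choices, patch size, straight-through gradient estimator, and all names (`PatchPolicy`, `quantize`, `sensing_bit_proxy`, `budget_aware_loss`) are illustrative.

```python
# Hypothetical sketch of budget-aware RAW sensing in the style of BRASS.
# Bit choices, patch size, and the straight-through estimator are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

BIT_CHOICES = (0, 4, 8, 12)  # 0 = skip the patch; others = assumed ADC bit-depths

class PatchPolicy(nn.Module):
    """Tiny per-patch policy: maps a cheap patch summary to bit-depth decisions."""
    def __init__(self, patch=16, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=patch, stride=patch),  # one vector per patch
            nn.ReLU(),
            nn.Conv2d(hidden, len(BIT_CHOICES), kernel_size=1),
        )

    def forward(self, raw):  # raw: (B, 1, H, W) RGGB mosaic scaled to [0, 1]
        probs = F.softmax(self.net(raw), dim=1)            # (B, K, H/p, W/p)
        hard = F.one_hot(probs.argmax(1), len(BIT_CHOICES)).permute(0, 3, 1, 2).float()
        # Straight-through: discrete decisions forward, soft gradients backward.
        return hard + probs - probs.detach()

def quantize(raw, decisions, patch=16):
    """Re-quantize each patch to its selected bit-depth (skipped patches stay zero)."""
    out = torch.zeros_like(raw)
    for k, bits in enumerate(BIT_CHOICES):
        if bits == 0:
            continue  # skipped patches contribute nothing (sparse tensor)
        mask = F.interpolate(decisions[:, k:k + 1], scale_factor=patch, mode="nearest")
        levels = 2 ** bits - 1
        q = torch.round(raw * levels) / levels
        q = raw + (q - raw).detach()                       # straight-through rounding
        out = out + mask * q
    return out

def sensing_bit_proxy(decisions):
    """Average sensed bits per pixel implied by the per-patch decisions."""
    bits = torch.tensor(BIT_CHOICES, dtype=decisions.dtype, device=decisions.device)
    return (decisions * bits.view(1, -1, 1, 1)).sum(1).mean()

def budget_aware_loss(logits, targets, decisions, budget_bits=4.0, lam=0.1):
    """Task loss plus a hinge penalty when average sensed bits exceed the budget."""
    task = F.cross_entropy(logits, targets)
    over = F.relu(sensing_bit_proxy(decisions) - budget_bits)
    return task + lam * over
```

In this reading, the hinge penalty treats the bit budget as a soft constraint: `lam` (an assumed hyperparameter) sweeps out the accuracy–efficiency trade-off the abstract describes, and the proxy counts bits only where the policy elects to read, consistent with the reported ≈0.47× sensing-bit figure being relative rather than an energy measurement.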