

Poster in Workshop: 3rd Offline Reinforcement Learning Workshop: Offline RL as a "Launchpad"

Collaborative symmetricity exploitation for offline learning of hardware design solver

Haeyeon Kim · Minsu Kim · Joungho Kim · Jinkyoo Park


Abstract:

This paper proposes the collaborative symmetricity exploitation (CSE) framework for training a solver for the decoupling capacitor placement problem (DPP), one of the significant hardware design problems. Because the hardware design process is sequentially coupled across multiple levels, the design condition of the DPP changes depending on the design decisions made at higher levels. Moreover, online evaluation of real-world electrical performance through simulation is extremely costly. The CSE framework therefore enables data-efficient offline learning of a DPP solver (i.e., a contextualized policy) with high generalization capability over changing task conditions. Leveraging symmetricity during offline learning of the hardware design solver increases data efficiency by reducing the solution space and improves generalization by capturing the invariant structure that holds regardless of changing conditions. Extensive experiments verify that CSE with zero-shot inference outperforms both neural baselines and iterative conventional design methods on the DPP benchmark. Furthermore, CSE greatly outperforms the expert method used to generate the offline training data.
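One concrete way symmetricity can shrink the solution space of a placement problem is order invariance: when the final set of placed components, not the order in which they were placed, determines electrical performance, every permutation of an expert trajectory is an equally valid training sequence. The sketch below illustrates this idea only; the paper's exact symmetricity formulation and the trajectory format here (a list of `(row, col)` grid positions) are assumptions, not the authors' implementation.

```python
import random

def augment_with_permutations(trajectory, max_augments=4, seed=0):
    """Given one expert trajectory (an ordered list of placement
    actions whose final design is order-invariant), return extra
    trajectories that reach the same final placement in a different
    order. One expensive simulation label thus covers many sequences.
    """
    rng = random.Random(seed)
    augments = []
    for _ in range(max_augments):
        perm = trajectory[:]       # copy, then reshuffle the order
        rng.shuffle(perm)
        augments.append(perm)
    return augments

# Hypothetical offline sample: positions chosen by an expert on a
# (row, col) grid; the final set, not the order, fixes performance.
expert = [(0, 3), (2, 1), (4, 4)]
for traj in augment_with_permutations(expert):
    assert set(traj) == set(expert)  # same final design, new order
```

Because the augmented trajectories share one simulated performance label, the effective dataset grows without extra simulator calls, which is the data-efficiency argument the abstract makes.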
