Lithography modeling is a crucial problem in chip design, ensuring that a mask design is manufacturable. It requires rigorous simulations of optical and chemical models that are computationally expensive. Recent developments in machine learning have provided alternative solutions that replace time-consuming lithography simulations with deep neural networks. However, a considerable accuracy drop still impedes industrial adoption. Most importantly, the quality and quantity of the training dataset directly affect model performance. To tackle this problem, we propose a litho-aware data augmentation (LADA) framework to resolve the dilemma of limited data and improve machine learning model performance. First, we pretrain the neural networks for lithography modeling and a gradient-friendly StyleGAN2 generator. We then perform adversarial active sampling to generate informative, synthetic in-distribution mask designs. These synthetic mask images augment the original limited training dataset used to finetune the lithography model for improved performance. Experimental results demonstrate that LADA can successfully exploit the neural network capacity by narrowing the performance gap between training and testing data instances.
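The adversarial active sampling step described above can be sketched as follows. This is a minimal toy sketch, not the paper's implementation: the random linear maps stand in for the pretrained StyleGAN2 generator and lithography network, and a finite-difference gradient ascent stands in for backpropagation through the real models. All function and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the pretrained models (assumption: the paper uses deep
# networks; these tanh-wrapped linear maps only illustrate the control flow).
W_gen = rng.normal(size=(16, 64))    # "generator": latent z -> flat mask image
W_litho = rng.normal(size=(64, 64))  # "lithography model": mask -> resist image

def generate(z):
    # Differentiable mask image from a latent code.
    return np.tanh(z @ W_gen)

def litho_loss(mask, target):
    # Error of the lithography model's prediction on this mask.
    pred = np.tanh(mask @ W_litho)
    return np.mean((pred - target) ** 2)

def adversarial_sample(z, target, steps=30, lr=0.1, eps=1e-4):
    """Ascend the lithography-model loss w.r.t. the latent z, so the
    generated (in-distribution) mask is maximally informative for finetuning.
    Finite-difference gradients keep the sketch dependency-free."""
    z = z.copy()
    best_z, best = z.copy(), litho_loss(generate(z), target)
    for _ in range(steps):
        base = litho_loss(generate(z), target)
        grad = np.zeros_like(z)
        for i in range(z.size):
            zp = z.copy()
            zp[i] += eps
            grad[i] = (litho_loss(generate(zp), target) - base) / eps
        z = z + lr * grad  # gradient *ascent*: seek hard samples
        cur = litho_loss(generate(z), target)
        if cur > best:
            best, best_z = cur, z.copy()
    return generate(best_z)

z0 = rng.normal(size=16)
target = np.zeros(64)
before = litho_loss(generate(z0), target)
hard_mask = adversarial_sample(z0, target)
after = litho_loss(hard_mask, target)
```

In the full framework, masks like `hard_mask` would be labeled by the reference simulator and appended to the training set before finetuning the lithography model.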
Author Information
Mingjie Liu (NVIDIA Corporation)
Haoyu Yang (NVIDIA)
David Pan (University of Texas, Austin)
Brucek Khailany (NVIDIA)
Mark Ren (NVIDIA)
More from the Same Authors
- 2022 : HEAT: Hardware-Efficient Automatic Tensor Decomposition for Transformer Compression »
  Jiaqi Gu · Ben Keller · Jean Kossaifi · Anima Anandkumar · Brucek Khailany · David Pan
- 2023 Poster: Pre-RMSNorm and Pre-CRMSNorm Transformers: Equivalent and Efficient Pre-LN Transformers »
  Zixuan Jiang · Jiaqi Gu · Hanqing Zhu · David Pan
- 2023 Poster: LithoBench: Benchmarking AI Computational Lithography for Semiconductor Manufacturing »
  Su Zheng · Haoyu Yang · Binwu Zhu · Bei Yu · Martin Wong
- 2022 Spotlight: NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation »
  Jiaqi Gu · Zhengqi Gao · Chenghao Feng · Hanqing Zhu · Ray Chen · Duane Boning · David Pan
- 2022 Poster: NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation »
  Jiaqi Gu · Zhengqi Gao · Chenghao Feng · Hanqing Zhu · Ray Chen · Duane Boning · David Pan
- 2021 : Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update »
  Jiawei Zhao · Steve Dai · Rangha Venkatesan · Brian Zimmer · Mustafa Ali · Ming-Yu Liu · Brucek Khailany · Anima Anandkumar
- 2021 Poster: L2ight: Enabling On-Chip Learning for Optical Neural Networks via Efficient in-situ Subspace Optimization »
  Jiaqi Gu · Hanqing Zhu · Chenghao Feng · Zixuan Jiang · Ray Chen · David Pan
- 2020 : NVCell: Generate Standard Cell Layout in Advanced Technology Nodes with Reinforcement Learning »
  Mark Ren