Artificial electromagnetic materials (AEMs), including metamaterials, derive their electromagnetic properties from geometry rather than chemistry. With appropriate geometric design, AEMs have achieved exotic properties not realizable with conventional materials (e.g., cloaking or a negative refractive index). However, the relationship between an AEM's structure and its properties is often poorly understood. While computational electromagnetic simulation (CEMS) can aid the design of new AEMs, its use is limited by long computation times. Recently, deep learning has been shown to be an alternative for inferring the relationship between an AEM's geometry and its properties from a (relatively) small pool of CEMS data. However, few datasets and models are publicly available, and no widely used benchmark exists for comparison, making deep learning approaches even harder to adopt. Furthermore, configuring CEMS for a specific problem requires substantial expertise and time, making reproducibility challenging. Here, we develop a collection of three classes of AEM problems: metamaterials, nanophotonics, and color filter designs. We also publicly release software that allows other researchers to easily conduct additional simulations for each system. Finally, we conduct experiments on our benchmark datasets with three recent neural network architectures: the multilayer perceptron (MLP), MLP-Mixer, and transformer. We identify the methods and models that generalize best across the three problems to establish best practices and baseline results upon which future research can build.
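The surrogate-modeling task described above amounts to regressing a fixed-length geometry parameter vector onto a discretized spectral response. The following is a minimal sketch of the MLP baseline's forward pass, not the paper's implementation; the layer sizes, the 14-parameter geometry encoding, and the 300-point spectrum are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_surrogate(geometry, weights, biases):
    """Forward pass of a simple MLP surrogate that maps AEM geometry
    parameters to a discretized spectral response (e.g., reflectance
    sampled at fixed frequency points)."""
    h = geometry
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)   # ReLU hidden layers
    return h @ weights[-1] + biases[-1]  # linear head: predicted spectrum

# Hypothetical dimensions: 14 geometry parameters -> 300-point spectrum.
dims = [14, 256, 256, 300]
weights = [rng.normal(0.0, 0.1, (n_in, n_out))
           for n_in, n_out in zip(dims[:-1], dims[1:])]
biases = [np.zeros(n_out) for n_out in dims[1:]]

geometry = rng.uniform(0.0, 1.0, (8, 14))  # batch of 8 candidate designs
spectra = mlp_surrogate(geometry, weights, biases)
print(spectra.shape)  # (8, 300)
```

In practice such a network would be trained on CEMS-generated (geometry, spectrum) pairs with a regression loss such as mean squared error; the appeal is that once trained, a forward pass costs microseconds versus the minutes-to-hours of a full simulation.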
Author Information
Yang Deng (Duke University)
Juncheng Dong (Duke University)
Simiao Ren (Duke University)
Omar Khatib (Duke University)
Mohammadreza Soltani (Duke University)
Vahid Tarokh (Duke University)
Willie Padilla (Duke University)
Jordan Malof (Duke University)