An efficient data generation method for ANN-based surrogate models

Ren Kai Tan, Chao Qian, Michael Wang, Wenjing Ye

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)

Abstract

Rapid development in deep learning methods has brought about wide applications of artificial-neural-network (ANN)-based surrogate models in the field of numerical simulation. Given sufficient training, ANN-based surrogate models can be both accurate and efficient, and thus can replace computationally expensive numerical simulations in problems where intensive numerical simulations are required. However, the training of ANN-based surrogate models relies on a large corpus of ground-truth data that is often generated using full-scale numerical simulations. For such large-scale problems, the computational cost of generating training data can be massive, which diminishes the efficiency gained by using ANN-based surrogate models and limits their application scope. In this work, a solution scheme is proposed to address this issue by reducing the number of full-scale numerical simulations needed during training data generation, thus reducing the training cost. The key idea is to utilize a Mapping Network that maps a coarse field to a fine field to generate fine-scale training data. Compared to surrogate models that map parameters/structures to fine fields, the Mapping Network is much easier to train and thus requires much less fine-scale data. In addition, it has much better transferability and can be easily adapted to a related but different task. Combined with transfer learning, the proposed scheme results in greatly reduced training costs compared to the approach without a Mapping Network.
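The core idea of the abstract, a network that maps a cheap coarse-mesh field to a fine-mesh field, can be illustrated with a minimal sketch. The class below is hypothetical and not the authors' architecture: it combines bilinear upsampling with a small (untrained, randomly initialized) convolutional correction, purely to show the coarse-to-fine mapping interface such a network would expose.

```python
import numpy as np

def upsample_bilinear(coarse, scale):
    """Bilinear upsampling of a 2D field by an integer scale factor."""
    h, w = coarse.shape
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * coarse[np.ix_(y0, x0)]
            + (1 - wy) * wx * coarse[np.ix_(y0, x1)]
            + wy * (1 - wx) * coarse[np.ix_(y1, x0)]
            + wy * wx * coarse[np.ix_(y1, x1)])

class MappingNetwork:
    """Hypothetical coarse-to-fine mapper: upsample the coarse field,
    then apply a learned residual correction. The 3x3 kernel here is
    untrained (random); in practice it would be fit on a small set of
    paired coarse/fine simulation fields."""
    def __init__(self, scale=4, seed=0):
        rng = np.random.default_rng(seed)
        self.scale = scale
        self.kernel = rng.normal(0.0, 0.01, (3, 3))

    def __call__(self, coarse_field):
        fine = upsample_bilinear(coarse_field, self.scale)
        # Residual correction via one 3x3 convolution with zero padding.
        padded = np.pad(fine, 1)
        corr = sum(self.kernel[i, j]
                   * padded[i:i + fine.shape[0], j:j + fine.shape[1]]
                   for i in range(3) for j in range(3))
        return fine + corr

# Usage: a 16x16 coarse-mesh field mapped to a 64x64 fine-mesh field.
coarse = np.random.default_rng(1).random((16, 16))
fine = MappingNetwork(scale=4)(coarse)
print(fine.shape)  # (64, 64)
```

Because the mapping depends mainly on the local relationship between coarse and fine discretizations rather than on the full parameter-to-field map, a model with this interface needs far fewer expensive fine-scale samples to train, which is the efficiency argument the abstract makes.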

Original language: English
Article number: 90
Number of pages: 22
Journal: Structural and Multidisciplinary Optimization
Volume: 65
Issue number: 3
DOIs
Publication status: Published - 14 Feb 2022
Externally published: Yes

Keywords

  • Artificial neural network
  • Deep learning
  • Surrogate model
  • Topology optimization
  • Transfer learning
