Dynamic Weight Strategy of Physics-Informed Neural Networks for the 2D Navier-Stokes Equations

Entropy (Basel). 2022 Sep 6;24(9):1254. doi: 10.3390/e24091254.

Abstract

When physics-informed neural networks (PINNs) are used to solve the Navier-Stokes equations, the loss function suffers from a gradient imbalance among its terms during training, which is one of the reasons the efficiency of PINNs is limited. This paper proposes a novel method for adaptively adjusting the weights of the loss terms so that their gradients remain balanced during training. The weights are updated following the idea of a minimax algorithm: the network identifies which types of training data are harder to fit and is forced to focus on those data before the next training step. Specifically, the weights of the hard-to-train data are adjusted to maximize the objective function; the network parameters are then adjusted to minimize the objective function, and these two steps alternate until the objective function converges. We demonstrate that the dynamic weights are monotonically non-decreasing and convergent during training. The method not only accelerates the convergence of the loss but also reduces the generalization error, and its computational efficiency outperforms other state-of-the-art PINN algorithms. The validity of the method is verified by solving forward and inverse problems of the Navier-Stokes equations.
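The following is a minimal sketch of the alternating max-min weight update described in the abstract, not the authors' implementation. It assumes just two loss terms (a stand-in PDE residual term and a stand-in data term), a toy network, and gradient-ascent weight updates; the names, step sizes, and loss definitions are illustrative placeholders, and the monotonicity and convergence guarantees stated in the paper are not enforced here.

```python
# Hypothetical sketch of an alternating min-max update for dynamic loss weights.
# All loss terms and the network are placeholders, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy fully connected network standing in for the PINN.
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
param_opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Trainable raw weights for the two loss terms, kept positive via softplus.
raw_weights = torch.zeros(2, requires_grad=True)
weight_opt = torch.optim.Adam([raw_weights], lr=1e-2)


def loss_terms(model):
    # Placeholder losses; a real PINN would compute the PDE residual on
    # collocation points and a data/boundary mismatch here.
    x = torch.rand(64, 2)
    u = model(x)
    residual_loss = (u ** 2).mean()                        # stands in for the PDE residual loss
    data_loss = ((u - torch.sin(x[:, :1])) ** 2).mean()    # stands in for the data loss
    return torch.stack([residual_loss, data_loss])


for step in range(1000):
    # Max step: increase the weights of the harder-to-train (larger) loss terms
    # by gradient ascent on the weighted objective (losses detached).
    weight_opt.zero_grad()
    weights = nn.functional.softplus(raw_weights)
    (-(weights * loss_terms(net).detach()).sum()).backward()
    weight_opt.step()

    # Min step: update the network parameters against the re-weighted objective.
    param_opt.zero_grad()
    weights = nn.functional.softplus(raw_weights).detach()
    (weights * loss_terms(net)).sum().backward()
    param_opt.step()
```

In this sketch the larger a loss term is, the faster its weight grows, which mimics the idea of forcing the network to focus on hard-to-train data; a practical implementation would also bound or normalize the weights so that the weighted objective remains well behaved.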

Keywords: Navier–Stokes equations; dynamic weight strategy; physics-informed neural networks.

Grants and funding

This work is supported in part by the Research Fund from the Key Laboratory of Xinjiang Province (No. 2020D04002).