Volume 16, Issue 1
Sparse Deep Neural Network for Nonlinear Partial Differential Equations

Yuesheng Xu & Taishan Zeng

Numer. Math. Theor. Meth. Appl., 16 (2023), pp. 58-78.

Published online: 2023-01

  • Abstract

Increasingly large amounts of data in applications demand more capable learning models. The data we encounter often have embedded sparsity structures: when represented in an appropriate basis, their energy concentrates on a small number of basis functions. This paper presents a numerical study of adaptive approximation, by deep neural networks (DNNs) with a multi-parameter sparse regularization, of solutions of nonlinear partial differential equations that may have singularities. Noting that DNNs have an intrinsic multi-scale structure favorable for the adaptive representation of functions, we employ a penalty with multiple parameters to develop DNNs with a multi-scale sparse regularization (SDNN) that effectively represent functions with certain singularities. We then apply the proposed SDNN to the numerical solution of the Burgers equation and the Schrödinger equation. Numerical examples confirm that the solutions generated by the proposed SDNN are sparse and accurate.
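The multi-parameter sparse regularization described above can be sketched as a layer-wise weighted ℓ1 penalty: each layer of the network receives its own penalty parameter, so components at different scales can be sparsified to different degrees. The sketch below is illustrative only, under assumed names (`sparse_penalty`, `lambdas`); the paper's exact penalty and training objective may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a list of weight matrices, one per layer.
weights = [rng.standard_normal((4, 4)) for _ in range(3)]

def sparse_penalty(weights, lambdas):
    """Multi-scale l1 penalty: sum over layers l of lambda_l * ||W_l||_1."""
    return sum(lam * np.abs(W).sum() for lam, W in zip(lambdas, weights))

# Assumption: different penalty parameters per layer; here a larger lambda on
# later layers drives the finer-scale components toward sparsity more strongly.
lambdas = [1e-4, 1e-3, 1e-2]
penalty = sparse_penalty(weights, lambdas)

# A full training loss would add this penalty to a PDE residual term, e.g.
#   loss = pde_residual(u_theta) + sparse_penalty(weights, lambdas)
print(f"penalty = {penalty:.6f}")
```

Minimizing such a weighted ℓ1 term alongside the PDE residual is what makes the learned representation sparse: weights that contribute little to reducing the residual are pushed to zero, layer by layer.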

  • AMS Subject Headings

68T07, 65D15, 65F50, 47A52, 65N50

  • Copyright

COPYRIGHT: © Global Science Press

  • BibTeX
  • RIS
  • TXT
@Article{NMTMA-16-58,
  author  = {Xu, Yuesheng and Zeng, Taishan},
  title   = {Sparse Deep Neural Network for Nonlinear Partial Differential Equations},
  journal = {Numerical Mathematics: Theory, Methods and Applications},
  year    = {2023},
  volume  = {16},
  number  = {1},
  pages   = {58--78},
  issn    = {2079-7338},
  doi     = {10.4208/nmtma.OA-2022-0104},
  url     = {http://global-sci.org/intro/article_detail/nmtma/21343.html}
}
TY  - JOUR
T1  - Sparse Deep Neural Network for Nonlinear Partial Differential Equations
AU  - Xu, Yuesheng
AU  - Zeng, Taishan
JO  - Numerical Mathematics: Theory, Methods and Applications
VL  - 16
IS  - 1
SP  - 58
EP  - 78
PY  - 2023
DA  - 2023/01
SN  - 2079-7338
DO  - 10.4208/nmtma.OA-2022-0104
UR  - https://global-sci.org/intro/article_detail/nmtma/21343.html
KW  - Sparse approximation
KW  - deep learning
KW  - nonlinear partial differential equations
KW  - sparse regularization
KW  - adaptive approximation
ER  -

Yuesheng Xu & Taishan Zeng. (2023). Sparse Deep Neural Network for Nonlinear Partial Differential Equations. Numerical Mathematics: Theory, Methods and Applications. 16 (1). 58-78. doi:10.4208/nmtma.OA-2022-0104