Volume 32, Issue 4
Generalization Error Analysis of Neural Networks with Gradient Based Regularization

Lingfeng Li, Xue-Cheng Tai & Jiang Yang

Commun. Comput. Phys., 32 (2022), pp. 1007-1038.

Published online: 2022-10

Export citation
  • Abstract

In this work, we study gradient-based regularization methods for neural networks. We mainly focus on two regularization methods: the total variation and the Tikhonov regularization. Adding the regularization term to the training loss is equivalent to solving certain variational problems with neural networks, which in practical applications are mostly high-dimensional. We introduce a general framework to analyze the error between neural network solutions and true solutions of variational problems. The error consists of three parts: the approximation error of neural networks, the quadrature error of numerical integration, and the optimization error. We also apply the proposed framework to two-layer networks to derive an a priori error estimate when the true solution belongs to the so-called Barron space. Moreover, we conduct numerical experiments showing that neural networks can solve the corresponding variational problems sufficiently well. Networks with gradient-based regularization are much more robust in image applications.
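The regularized training loss described above can be sketched in a few lines. The following is a minimal illustration, not the paper's code: a two-layer ReLU network whose training objective is a data-fit term plus a Tikhonov penalty $\lambda \int |\nabla u|^2$, estimated here by Monte Carlo sampling with central finite differences. All names, constants, and the sampling domain $[0,1]^d$ are illustrative assumptions.

```python
import math
import random

random.seed(0)
DIM, WIDTH = 2, 8  # input dimension and network width (illustrative)

# Random two-layer network parameters: u(x) = sum_i a_i * relu(w_i . x + b_i)
params = {
    "w": [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(WIDTH)],
    "b": [random.gauss(0, 1) for _ in range(WIDTH)],
    "a": [random.gauss(0, 1) for _ in range(WIDTH)],
}

def u(x, p):
    """Two-layer ReLU network evaluated at point x."""
    return sum(
        p["a"][i]
        * max(0.0, sum(wij * xj for wij, xj in zip(p["w"][i], x)) + p["b"][i])
        for i in range(WIDTH)
    )

def grad_sq(x, p, h=1e-4):
    """|grad u(x)|^2 approximated by central finite differences."""
    g2 = 0.0
    for j in range(DIM):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        g2 += ((u(xp, p) - u(xm, p)) / (2 * h)) ** 2
    return g2

def regularized_loss(data, p, lam=0.1, n_mc=64):
    """Data-fit term plus lam * Monte Carlo estimate of int |grad u|^2."""
    fit = sum((u(x, p) - y) ** 2 for x, y in data) / len(data)
    xs = [[random.uniform(0, 1) for _ in range(DIM)] for _ in range(n_mc)]
    penalty = sum(grad_sq(x, p) for x in xs) / n_mc
    return fit + lam * penalty

data = [([0.2, 0.3], 1.0), ([0.7, 0.5], 0.0)]
loss = regularized_loss(data, params)
```

For the total variation variant, the penalty term would instead be `math.sqrt(grad_sq(x, p))`, i.e. $\int |\nabla u|$ rather than $\int |\nabla u|^2$. The Monte Carlo quadrature here is also what gives rise to the quadrature-error term in the three-part error decomposition mentioned in the abstract.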

  • AMS Subject Headings

68T07

  • Copyright

COPYRIGHT: © Global Science Press

  • BibTex
  • RIS
  • TXT
@Article{CiCP-32-1007,
  author  = {Li, Lingfeng and Tai, Xue-Cheng and Yang, Jiang},
  title   = {Generalization Error Analysis of Neural Networks with Gradient Based Regularization},
  journal = {Communications in Computational Physics},
  year    = {2022},
  volume  = {32},
  number  = {4},
  pages   = {1007--1038},
  issn    = {1991-7120},
  doi     = {https://doi.org/10.4208/cicp.OA-2021-0211},
  url     = {http://global-sci.org/intro/article_detail/cicp/21137.html}
}
TY  - JOUR
T1  - Generalization Error Analysis of Neural Networks with Gradient Based Regularization
AU  - Li, Lingfeng
AU  - Tai, Xue-Cheng
AU  - Yang, Jiang
JO  - Communications in Computational Physics
VL  - 32
IS  - 4
SP  - 1007
EP  - 1038
PY  - 2022
DA  - 2022/10
SN  - 1991-7120
DO  - 10.4208/cicp.OA-2021-0211
UR  - https://global-sci.org/intro/article_detail/cicp/21137.html
KW  - Machine learning
KW  - regularization
KW  - generalization error
KW  - image classification
ER  -
Lingfeng Li, Xue-Cheng Tai & Jiang Yang. (2022). Generalization Error Analysis of Neural Networks with Gradient Based Regularization. Communications in Computational Physics. 32 (4). 1007-1038. doi:10.4208/cicp.OA-2021-0211