Volume 21, Issue 4
Simulation of Maxwell’s Equations on GPU Using a High-Order Error-Minimized Scheme

Tony W. H. Sheu, S. Z. Wang, J. H. Li and Matthew R. Smith


Commun. Comput. Phys., 21 (2017), pp. 1039-1064.

  • Abstract

In this study an explicit Finite Difference Method (FDM) based scheme is developed to solve Maxwell's equations in the time domain for a lossless medium. This manuscript focuses on two unique aspects: the three-dimensional, time-accurate discretization of the hyperbolic system of Maxwell equations on a three-point non-staggered grid stencil, and its application to parallel computing through the use of Graphics Processing Units (GPUs). The proposed temporal scheme is symplectic, thus permitting conservation of all Hamiltonians in the Maxwell equations. Moreover, to enable accurate predictions over large time frames, a phase-velocity-preserving scheme is developed for the treatment of the spatial derivative terms. As a result, the chosen time increment and grid spacing can be optimally coupled. An additional theoretical investigation into this pairing is also presented. Finally, the application of the proposed scheme to parallel computing using a single Nvidia K20 Tesla GPU card is demonstrated. For the benchmarks performed, the parallel speedup relative to a single core of an Intel i7-4820K CPU is approximately 190x.
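The explicit, symplectic time stepping summarized above can be illustrated with a minimal sketch. The snippet below is not the authors' high-order, non-staggered scheme; it is the standard second-order leapfrog update for a one-dimensional lossless medium, shown only to demonstrate how an explicit symplectic FDTD step keeps the discrete energy (Hamiltonian) bounded over long integrations. All grid sizes and parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative 1D lossless Maxwell (TEM) solver with leapfrog time stepping.
# This is only a minimal symplectic analogue of the scheme in the paper:
# E and H are updated in alternation (staggered in time), which keeps the
# discrete energy bounded rather than drifting, as an explicit non-symplectic
# scheme would.

n = 200                      # grid points (illustrative)
dx = 1.0 / n                 # grid spacing, normalized units (c = 1)
dt = 0.5 * dx                # time step chosen below the CFL limit dt <= dx

x = np.arange(n) * dx
E = np.exp(-((x - 0.5) / 0.05) ** 2)   # Gaussian initial electric field
H = np.zeros(n)                        # zero initial magnetic field

def energy(E, H):
    """Discrete electromagnetic energy, 0.5 * sum(E^2 + H^2) * dx."""
    return 0.5 * np.sum(E**2 + H**2) * dx

e0 = energy(E, H)
for _ in range(2000):
    # Leapfrog update on a periodic domain: H advances half a step ahead of E.
    H -= dt / dx * (np.roll(E, -1) - E)
    E -= dt / dx * (H - np.roll(H, 1))

drift = abs(energy(E, H) - e0) / e0
print(f"relative energy drift after 2000 steps: {drift:.3e}")
```

The same alternating-update structure carries over to three dimensions, where each of the six field components is advanced by an independent stencil operation; that independence is what makes the method map naturally onto GPU threads.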

  • History

Published online: 2018-04
