A GPU implementation of least-squares reverse time migration
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Published: 2022
Subjects:
Online Access: https://repository.li.mahidol.ac.th/handle/123456789/79020
Institution: Mahidol University
Summary: Least-squares reverse time migration (LSRTM) is a seismic imaging method that can provide a higher-resolution image of subsurface structures than other methods. However, LSRTM is computationally expensive, and a GPU can be used to reduce its run time. The objective of this work is therefore to develop a GPU implementation of LSRTM. In this work, the two-dimensional first-order acoustic wave equations are solved with a second-order finite-difference scheme on a staggered grid, and a perfectly matched layer is used as the absorbing boundary condition. The adjoint-state method is used to compute the gradient of the objective function, and a linear conjugate gradient method is used to minimize it. Both forward and backward propagation of the wavefields with the finite-difference method are performed on a single GPU using the NVIDIA CUDA library. For verification, the GPU program was applied to a synthetic data set generated from the Marmousi model. Numerical results show that LSRTM provides an image of the subsurface structure with higher resolution than a conventional RTM image. In terms of computational cost, the GPU version of LSRTM is significantly faster than the serial CPU version.
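
To make the summary concrete, here is a hedged sketch of the quantities it refers to. LSRTM is commonly posed as a linear least-squares problem for a reflectivity model; the notation below (L for the Born modelling operator, m for the reflectivity, d for the recorded data) is introduced here for illustration and is not taken from the record itself.

$$
J(m) = \tfrac{1}{2}\,\lVert Lm - d \rVert_2^2, \qquad \nabla J(m) = L^{\top}\!\left(Lm - d\right)
$$

The action of the adjoint operator L^T (migration) is what the adjoint-state method evaluates by cross-correlating the forward-propagated source wavefield with the back-propagated data residual; the linear conjugate gradient iteration then minimizes J.

As an illustration of the GPU side, the following is a minimal CUDA sketch of one time step of the 2-D first-order acoustic wave equations using second-order centred differences on a staggered grid, roughly the kind of kernel the abstract describes. All names, the grid layout, and the omission of PML terms, density averaging at staggered points, and source injection are simplifying assumptions made here for brevity, not code from the paper.

```cuda
// Hypothetical sketch: one time step of the 2-D first-order acoustic wave
// equations on a staggered grid, 2nd-order centred differences.
// PML terms, source injection, and staggered density averaging are omitted.
#include <cuda_runtime.h>

__global__ void update_velocity(float *vx, float *vz, const float *p,
                                const float *rho, int nx, int nz,
                                float dt, float dx, float dz)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iz = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix >= nx - 1 || iz >= nz - 1) return;      // stay inside the grid

    int idx = iz * nx + ix;
    // vx lives at (ix+1/2, iz), vz at (ix, iz+1/2) on the staggered grid
    vx[idx] -= dt / (rho[idx] * dx) * (p[idx + 1]  - p[idx]);
    vz[idx] -= dt / (rho[idx] * dz) * (p[idx + nx] - p[idx]);
}

__global__ void update_pressure(float *p, const float *vx, const float *vz,
                                const float *kappa, int nx, int nz,
                                float dt, float dx, float dz)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iz = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix < 1 || iz < 1 || ix >= nx || iz >= nz) return;

    int idx = iz * nx + ix;
    // kappa = rho * c^2; divergence of the particle-velocity field
    float div = (vx[idx] - vx[idx - 1]) / dx + (vz[idx] - vz[idx - nx]) / dz;
    p[idx] -= dt * kappa[idx] * div;
}
```

Under these assumptions, a host loop would launch update_velocity and then update_pressure once per time step (for example with 16x16 thread blocks tiling the nx-by-nz grid), for both the forward propagation and the backward (adjoint) propagation of the residual wavefield.
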