Temporal phase unwrapping using deep learning
The multi-frequency temporal phase unwrapping (MF-TPU) method, a classical phase unwrapping algorithm for fringe projection techniques, can eliminate phase ambiguities even when measuring spatially isolated scenes or objects with discontinuous surfaces. In the simplest and most efficient case of MF-TPU, two groups of phase-shifting fringe patterns with different frequencies are used: the high-frequency one is applied for 3D reconstruction of the tested object, and the unit-frequency one is used to assist unwrapping of the high-frequency wrapped phase. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency pattern, under the precondition that its absolute phase can be successfully recovered without any fringe order errors. However, due to non-negligible noise and other error sources in actual measurement, the frequency of the high-frequency fringes is generally restricted to about 16, resulting in limited measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns can unwrap phases with higher frequency, but at the expense of a prolonged pattern sequence.

With recent developments and advancements of machine learning for computer vision and computational imaging, this work demonstrates that deep learning techniques can automatically realize TPU through supervised learning, an approach termed deep learning-based temporal phase unwrapping (DL-TPU). DL-TPU can substantially improve unwrapping reliability compared with MF-TPU even under different types of error sources, e.g., intensity noise, low fringe modulation, projector nonlinearity, and motion artifacts. Furthermore, we demonstrate experimentally, for the first time to our knowledge, that a high-frequency phase with 64 periods can be directly and reliably unwrapped from a single unit-frequency phase using DL-TPU. These results highlight that challenging issues in optical metrology can potentially be overcome through machine learning, opening new avenues for designing powerful and extremely accurate high-speed 3D imaging systems that are ubiquitous in today's science, industry, and multimedia.
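The classical two-frequency MF-TPU scheme described in the abstract can be sketched in a few lines: the unit-frequency phase, scaled by the high frequency, selects the integer fringe order that removes the 2π ambiguity of the high-frequency wrapped phase. The following is a minimal illustrative sketch, not code from the paper; the function name and the synthetic noise-free demo are assumptions for demonstration.

```python
import numpy as np

def mf_tpu_unwrap(phi_high, phi_unit, freq):
    """Classical two-frequency MF-TPU: recover the absolute high-frequency
    phase from the wrapped high-frequency phase and a unit-frequency phase.

    phi_high : wrapped high-frequency phase, in (-pi, pi]
    phi_unit : unit-frequency (unambiguous) phase
    freq     : number of fringe periods in the high-frequency pattern
    """
    # Fringe order: round the mismatch between the scaled unit-frequency
    # phase and the wrapped phase to the nearest multiple of 2*pi.
    k = np.round((freq * phi_unit - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic noise-free demo with 16 fringe periods (illustrative only)
freq = 16
x = np.linspace(0.0, 1.0, 1000)
phase_abs = 2 * np.pi * freq * x              # ground-truth absolute phase
phi_unit = 2 * np.pi * x                      # unit-frequency phase
phi_high = np.angle(np.exp(1j * phase_abs))   # wrapped to (-pi, pi]
unwrapped = mf_tpu_unwrap(phi_high, phi_unit, freq)
print(np.allclose(unwrapped, phase_abs))      # True in the noise-free case
```

In practice, noise in the measured phases perturbs the rounding step, which is why the paper notes that the usable high frequency is restricted to about 16 for MF-TPU; DL-TPU replaces this hand-crafted fringe-order estimate with a learned mapping.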
Saved in:
Main Authors: | Yin, Wei; Chen, Qian; Feng, Shijie; Tao, Tianyang; Huang, Lei; Trusiak, Maciej; Asundi, Anand Krishna; Zuo, Chao |
Other Authors: | School of Mechanical and Aerospace Engineering; Centre for Optical and Laser Engineering |
Format: | Article |
Language: | English |
Published: | 2021 |
Subjects: | Engineering::Electrical and electronic engineering; Imaging and Sensing; Optical Sensors |
Online Access: | https://hdl.handle.net/10356/146221 |
Institution: | Nanyang Technological University |
Citation: | Yin, W., Chen, Q., Feng, S., Tao, T., Huang, L., Trusiak, M., . . . Zuo, C. (2019). Temporal phase unwrapping using deep learning. Scientific Reports, 9(1), 20175. doi:10.1038/s41598-019-56222-3 |
DOI: | 10.1038/s41598-019-56222-3 |
ISSN: | 2045-2322 |
Rights: | © 2019 The Author(s). This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. |