Implementation of machine learning techniques to denoise and unmix TEM spectroscopic dataset


Bibliographic Details
Main Author: Quang, Uy Thinh
Other Authors: Martial Duchamp
Format: Final Year Project
Language: English
Published: 2018
Online Access: http://hdl.handle.net/10356/73745
Institution: Nanyang Technological University
Description
Summary: Rapid advancement in Transmission Electron Microscopy (TEM) instrumentation has led to the acquisition of higher-resolution, nanoscale images, allowing materials scientists to obtain in-depth analyses of material samples with complex designs. Concurrently, however, it has resulted in highly mixed datasets: each pixel of the imaged sample is a combination of signals from multiple constituent elements and phases. Separating, or unmixing, such mixed images is required for tasks such as quantification and identification. This project involves two computational algorithms developed for this purpose: Vertex Component Analysis (VCA) and Bayesian Linear Unmixing (BLU).

The project first focused on implementing these algorithms in HyperSpy, an open-source analytical imaging toolbox developed in Python. The new code for both techniques was designed independently and incorporated into the existing software scripts so that it could fully use the functionality available in HyperSpy. The implementation was confirmed to be operational through verification tests on sample EDX and EELS images, ensuring that the code did not produce random unmixing outputs.

The project's second phase studied dataset pre-treatment techniques on heavily noise-corrupted EDX images and compared the unmixing performance of BLU and VCA. The images were those of a methylammonium lead iodide (MAPbI3) perovskite film and an In(Zn)P/ZnS core-shell nanocrystal. Permuted combinations of three pre-treatment methods, namely binning, cropping and normalization, were applied to the images. Binning boosts the signal by reducing the image resolution, cropping targets the region of interest to exclude irrelevant signals, and normalization accounts for the shot-noise character of EDX images. Applying the three methods together produced the optimal unmixing outputs for both BLU and VCA.

Furthermore, a synthetic dataset was created with HyperSpy to test the Signal-to-Noise Ratio (SNR) dependence of BLU and VCA. Interestingly, BLU had a larger margin of unmixing error than VCA, but in heavily noise-corrupted conditions BLU performed marginally better. Overall, however, VCA excelled, with a lighter resource demand, faster processing time and reasonably accurate unmixing outputs.
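As an illustration of the workflow the abstract describes, the sketch below shows how an unmixing routine can be driven through HyperSpy's standard decomposition interface. The algorithm name "VCA" is an assumption based on the project description, not an identifier shipped with stock HyperSpy; the file name and component count are likewise placeholders.

```python
# Minimal sketch, assuming the project's VCA/BLU routines were hooked into
# HyperSpy's decomposition interface. algorithm="VCA" is an assumed name;
# stock HyperSpy ships e.g. "SVD", "NMF" or "MLPCA" instead.
import hyperspy.api as hs

# Load a spectrum image (any HyperSpy-readable EDX/EELS map).
s = hs.load("edx_map.hspy")  # placeholder file name

# Unmix into a chosen number of components (endmembers).
s.decomposition(algorithm="VCA", output_dimension=3)

# Factors hold the separated component spectra; loadings hold their
# spatial abundance maps.
factors = s.get_decomposition_factors()
loadings = s.get_decomposition_loadings()
factors.plot()
loadings.plot()
```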
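The three pre-treatment steps map naturally onto standard HyperSpy operations. A minimal sketch follows, assuming an EDX spectrum image with two navigation (spatial) axes and one signal axis; the crop window and binning factors are illustrative, not values from the project.

```python
# Sketch of the binning / cropping / normalization pre-treatment pipeline
# using standard HyperSpy operations.
import hyperspy.api as hs

s = hs.load("edx_map.hspy")  # placeholder file name

# Cropping: restrict the navigation (spatial) axes to the region of
# interest so that unmixing is not biased by irrelevant signals.
s_roi = s.inav[10:74, 20:84]  # illustrative pixel window

# Binning: merge 2x2 blocks of pixels to boost counts per spectrum at the
# cost of spatial resolution (the signal axis is left unbinned).
s_binned = s_roi.rebin(scale=(2, 2, 1))

# Normalization: account for the shot-noise (Poisson) statistics of EDX
# counts, which HyperSpy exposes as a decomposition option.
s_binned.decomposition(normalize_poissonian_noise=True,
                       algorithm="SVD",  # stand-in; the project used VCA/BLU
                       output_dimension=3)
```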
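Along the same lines as the SNR study, a synthetic mixed dataset with Poisson (shot) noise can be assembled in a few lines of HyperSpy and NumPy; the Gaussian endmember spectra and the linear mixing map below are invented purely for illustration.

```python
# Sketch of a synthetic noisy dataset for probing SNR dependence.
import numpy as np
import hyperspy.api as hs

channels = np.arange(256)
# Two synthetic endmember spectra (Gaussian peaks at different energies).
a = np.exp(-((channels - 80) ** 2) / (2 * 6.0 ** 2))
b = np.exp(-((channels - 170) ** 2) / (2 * 6.0 ** 2))

# Spatially varying abundances on a 32x32 map, linearly mixing the two
# endmembers from pure "b" on the left to pure "a" on the right.
abund = np.outer(np.ones(32), np.linspace(0, 1, 32))
cube = abund[..., None] * a + (1 - abund)[..., None] * b

# Scale to a chosen mean count level and apply Poisson (shot) noise;
# lowering `counts` lowers the SNR of the synthetic map.
counts = 50.0
s = hs.signals.Signal1D(cube * counts)
s.add_poissonian_noise()

# The unmixing output at each count level can then be compared against the
# known ground-truth spectra to quantify the error of VCA and BLU.
```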