Deep convolutional neural networks for manufactured IC image analysis
Format: Final Year Project
Language: English
Published: 2019
Online Access: http://hdl.handle.net/10356/78126
Institution: Nanyang Technological University
Summary: Image analysis for manufactured Integrated Circuits (ICs) plays an important role in IC function verification, hardware security assurance, intellectual property protection, and related areas. Circuit extraction is one of the most common and reliable approaches to manufactured IC image analysis. However, the annotation of delayered IC images, a crucial step in circuit extraction, is becoming infeasible with conventional manual methods due to the increasing complexity of modern VLSI designs. Recent research efforts have therefore been devoted to automating the IC image annotation process using image processing or machine learning techniques. In this final year project, we first developed a deep convolutional neural network based segmentation model (wptnet) for pixel-wise annotation of circuit components in the metal layer of our delayered IC images. The proposed wptnet achieved a mean intersection over union (mIoU) of 88.98% and a mean pixel accuracy of 94.35% on 880 test images from the IC metal layer (image dimensions: 224 × 224 pixels). However, IC chips normally have more than one layer, and images of different layers exhibit different image features, so segmentation performance degrades when a model trained on one layer is applied to another: wptnet trained on our source set of IC images achieves only an mIoU of 81.54% and a mean pixel accuracy of 89.54% on a target set of IC images that differs slightly from the source set. Preparing another set of training data to retrain the model for each new layer is time-consuming and resource-demanding. To improve efficiency, we further present wptnetDA, a network that incorporates domain adaptation techniques to segment delayered images from different layers.
Specifically, we adopt domain confusion with the Maximum Mean Discrepancy (MMD). wptnetDA then achieves an mIoU of 88.51% and a mean pixel accuracy of 95.74% on the target set without degrading performance on the source set.
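The summary reports mean intersection over union and mean pixel accuracy for pixel-wise segmentation. As a minimal sketch of how these two standard metrics are typically computed from integer label maps (the class count and function name here are illustrative, not taken from the project's code):

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Mean IoU and mean (per-class) pixel accuracy for label maps.

    pred, gt: integer arrays of the same shape, values in [0, num_classes).
    """
    ious, accs = [], []
    for c in range(num_classes):
        p, g = (pred == c), (gt == c)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        if union > 0:                      # class present in pred or gt
            ious.append(inter / union)
        if g.sum() > 0:                    # class present in ground truth
            accs.append(inter / g.sum())
    return float(np.mean(ious)), float(np.mean(accs))
```

For a perfect prediction both metrics evaluate to 1.0; classes absent from an image are skipped so they do not distort the averages.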
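The domain adaptation component relies on the Maximum Mean Discrepancy, a kernel-based distance between the source and target feature distributions that the network is trained to minimize. A minimal NumPy sketch of the biased squared-MMD estimator with a Gaussian kernel (the bandwidth `sigma` and function names are assumptions for illustration; the project's actual loss is computed on network features during training):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of x and rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches."""
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

Identical batches give an MMD of zero, while features drawn from shifted distributions give a strictly positive value; using this quantity as an auxiliary loss pushes source and target feature distributions together, which is the "domain confusion" idea the summary refers to.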