TY - GEN
T1 - A Masked Autoencoder-Based Approach for Defect Classification in Semiconductor Manufacturing
AU - Lu, Hu
AU - Shen, Jiwei
AU - Zhao, Botong
AU - Lou, Pengjie
AU - Zhou, Wenzhan
AU - Zhou, Kan
AU - Zhao, Xintong
AU - Lyu, Shujing
AU - Lu, Yue
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - In semiconductor manufacturing, automatic defect classification is of paramount importance. Even the slightest defects can compromise chip performance or lead to complete failure, subsequently impacting chip yield rates. Currently, defect classification still heavily relies on manual processes, often leading to a significant number of misclassifications. In this paper, we propose a method based on MAE (Masked Autoencoder) for automatic defect classification in chip manufacturing. The core concept of MAE involves applying a high-proportion random mask to images, creating a challenging image reconstruction task. Using the unmasked image patches, the model predicts the masked patches for self-supervised pretraining. When applied to downstream tasks, this methodology enhances the model's generalization and feature representation capabilities. In a task-agnostic way, we conduct self-supervised pretraining on a large number of SEM (Scanning Electron Microscope) images without the necessity of any labels. In a task-specific way, we fine-tune the network using a limited amount of highly reliable labels. Experimental results suggest that our method is capable of accurately classifying defects with minimal labeled data, greatly reducing labor costs.
AB - In semiconductor manufacturing, automatic defect classification is of paramount importance. Even the slightest defects can compromise chip performance or lead to complete failure, subsequently impacting chip yield rates. Currently, defect classification still heavily relies on manual processes, often leading to a significant number of misclassifications. In this paper, we propose a method based on MAE (Masked Autoencoder) for automatic defect classification in chip manufacturing. The core concept of MAE involves applying a high-proportion random mask to images, creating a challenging image reconstruction task. Using the unmasked image patches, the model predicts the masked patches for self-supervised pretraining. When applied to downstream tasks, this methodology enhances the model's generalization and feature representation capabilities. In a task-agnostic way, we conduct self-supervised pretraining on a large number of SEM (Scanning Electron Microscope) images without the necessity of any labels. In a task-specific way, we fine-tune the network using a limited amount of highly reliable labels. Experimental results suggest that our method is capable of accurately classifying defects with minimal labeled data, greatly reducing labor costs.
KW - Defect Classification
KW - Masked Autoencoder
KW - Self-supervised Pretraining
KW - Semiconductor Manufacturing
UR - https://www.scopus.com/pages/publications/85183053907
U2 - 10.1109/IWAPS60466.2023.10366134
DO - 10.1109/IWAPS60466.2023.10366134
M3 - Conference contribution
AN - SCOPUS:85183053907
T3 - IWAPS 2023 - 2023 7th International Workshop on Advanced Patterning Solutions
BT - IWAPS 2023 - 2023 7th International Workshop on Advanced Patterning Solutions
A2 - Wei, Yayi
A2 - Ye, Tianchun
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th International Workshop on Advanced Patterning Solutions, IWAPS 2023
Y2 - 26 October 2023 through 27 October 2023
ER -