Self-pretraining of 3D transformer variations with masked autoencoders for multiple instances in medical image analysis
Medical image analysis is a multidisciplinary field that brings together medical imaging, mathematical modelling, artificial intelligence and other technologies. Its key processes include digital image processing, feature analysis, evaluation and decision making. Traditional medical image analysis met...
Main Author: Li, Linyuan
Other Authors: Jiang Xudong
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/173323
Institution: Nanyang Technological University
Similar Items
- Masked autoencoders for contrastive learning of heterogenous graphs
  by: Srinthi Nachiyar D/O Thangamuthu
  Published: (2024)
- Unsupervised anomaly detection in medical images with a memory-augmented multi-level cross-attentional masked autoencoder
  by: TIAN, Yu, et al.
  Published: (2023)
- Towards understanding why mask reconstruction pretraining helps in downstream tasks
  by: PAN, Jiachun, et al.
  Published: (2023)
- Few-shot contrastive transfer learning with pretrained model for masked face verification
  by: Weng, Zhenyu, et al.
  Published: (2024)
- A robust operators’ cognitive workload recognition method based on denoising masked autoencoder
  by: Yu, Xiaoqing, et al.
  Published: (2024)