IMPROVING COMPUTATIONAL EFFICIENCY OF U-NET ARCHITECTURE FOR LUNG CANCER SEGMENTATION IN COMPUTED TOMOGRAPHY SCANS VIA KNOWLEDGE DISTILLATION
Haidar Muhammad Zidan, Oskar Natan, S.ST., M.Tr.T., Ph.D.; Dr. Dyah Aruming Tyas, S.Si.
2025 | Undergraduate Thesis (Skripsi) | Electronics and Instrumentation
Lung cancer segmentation in computed tomography (CT) scans is a crucial step in early diagnosis; however, deep learning models such as U-Net often require substantial computational resources, hindering their clinical deployment. This research aims to improve the computational efficiency of the U-Net architecture for lung cancer segmentation without sacrificing accuracy by applying knowledge distillation (KD). Within a teacher-student framework, a complex teacher model (U-Net with a ResNet-152 encoder) is used to train a lighter student model (U-Net with a ResNet-18 encoder) by transferring dark knowledge. The LIDC-IDRI dataset is used to train and evaluate three models: the teacher, a baseline student (without KD), and a distilled student. Experimental results show that the student model optimized through KD not only achieves significant efficiency gains but, surprisingly, also outperforms the teacher model. The distilled model reduces the number of parameters by 86.7% and speeds up inference by almost a factor of three (1.62 s vs. 4.63 s) compared to the teacher model. Furthermore, it achieves a Dice Similarity Coefficient (DSC) of 0.6782, higher than both the teacher model (0.6043) and the baseline student model (0.5546). These findings demonstrate that knowledge distillation functions effectively not only as a model compression technique but also as a strong regularization mechanism, capable of improving the generalization ability of smaller models.
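The two core ingredients described above, transferring dark knowledge via softened teacher outputs and evaluating segmentation quality with the Dice Similarity Coefficient, can be sketched minimally as follows. This is an illustrative pure-Python sketch under my own naming, not the thesis code: in practice the distillation loss is applied per pixel over logit maps and implemented in a deep learning framework, and the temperature value here is an assumed example.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits.
    Higher temperatures flatten the distribution, exposing 'dark knowledge'."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 (Hinton-style KD). Applied per pixel in segmentation."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice Similarity Coefficient between two flat binary masks (0/1 lists):
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    return (2.0 * intersection + eps) / (sum(pred_mask) + sum(true_mask) + eps)
```

A perfectly overlapping prediction yields a DSC of 1.0, while a student whose logits match the teacher's exactly incurs zero distillation loss; the total training objective typically mixes this soft-target term with an ordinary supervised segmentation loss against the ground-truth masks.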
Keywords: Lung Cancer Segmentation, U-Net, Knowledge Distillation, Computational Efficiency, Deep Learning