Show simple item record

dc.contributor.author: Kuş, Zeki
dc.date.accessioned: 2025-03-20T13:25:01Z
dc.date.available: 2025-03-20T13:25:01Z
dc.date.issued: 2024 [en_US]
dc.identifier.citation: KUŞ, Zeki. "UKnow-Net: Knowledge-Enhanced U-Net for Improved Retinal Vessel Segmentation". Gazi University Journal of Science Part A: Engineering and Innovation, 11.4 (2024): 742-758. [en_US]
dc.identifier.uri: https://dergipark.org.tr/en/pub/gujsa/issue/89260/1575986
dc.identifier.uri: https://hdl.handle.net/11352/5236
dc.description.abstract: Retinal vessel segmentation plays a critical role in diagnosing and managing ophthalmic and systemic diseases, as abnormalities in the retinal vasculature can indicate disease progression. Traditional manual segmentation by expert ophthalmologists is time-consuming, labor-intensive, and prone to variability, underscoring the need for automated methods. While deep learning approaches such as U-Net have advanced retinal vessel segmentation, they often struggle to generalize across diverse datasets due to differences in image acquisition techniques, resolutions, and patient demographics. To address these challenges, we propose UKnow-Net, a knowledge-enhanced U-Net architecture designed to improve retinal vessel segmentation across multiple datasets. UKnow-Net employs a multi-step process involving knowledge distillation and enhancement techniques. First, we train four specialized teacher networks separately on four publicly available retinal vessel segmentation datasets (DRIVE, CHASE_DB1, DCA1, and CHUAC), allowing each to specialize in the unique features of its respective dataset. These teacher networks generate pseudo-labels representing their domain-specific knowledge. We then train a student network on the ensemble of pseudo-labels from all teacher networks, effectively distilling their collective expertise into a unified model capable of generalizing across different datasets. Experiments demonstrate that UKnow-Net outperforms traditional handcrafted networks (such as U-Net, UNet++, and Attention U-Net) and several state-of-the-art models on key performance metrics, including sensitivity, specificity, F1 score, and Intersection over Union (IoU).
Specifically, our two variants, UKnowNet-A and UKnowNet-B, perform well: UKnowNet-A, trained solely on pseudo-labels, achieves higher sensitivity across all datasets, indicating a superior ability to detect true positives, while UKnowNet-B, which combines pseudo-labels with ground-truth annotations, achieves balanced precision and recall, leading to higher F1 scores and IoU. The integration of pseudo-labels effectively transfers the collective expertise of the teacher networks to the student network, enhancing generalization and robustness. To ensure fair comparison and reproducibility in future research, we publicly share our source code and model weights. [en_US]
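The pipeline described in the abstract can be sketched minimally: per-pixel probability maps from the four teacher networks are combined into ensemble pseudo-labels, and binary vessel masks are scored with the reported metrics (sensitivity, specificity, F1, IoU). This is an illustrative numpy sketch, not the authors' implementation; the averaging rule and the 0.5 threshold are assumptions.

```python
import numpy as np

def ensemble_pseudo_labels(teacher_probs, threshold=0.5):
    """Average per-pixel vessel probabilities from several teacher
    networks and threshold the mean into a binary pseudo-label mask.
    (Assumed ensembling rule; the paper's exact scheme may differ.)"""
    mean_prob = np.mean(np.stack(teacher_probs, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

def segmentation_metrics(pred, truth):
    """Sensitivity, specificity, F1, and IoU for binary vessel masks."""
    tp = np.sum((pred == 1) & (truth == 1))  # vessel pixels found
    tn = np.sum((pred == 0) & (truth == 0))  # background pixels found
    fp = np.sum((pred == 1) & (truth == 0))  # background marked as vessel
    fn = np.sum((pred == 0) & (truth == 1))  # vessel pixels missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return sensitivity, specificity, f1, iou

# Toy example: three teachers, each emitting a 2x2 probability map.
teachers = [np.array([[0.9, 0.2], [0.4, 0.8]]),
            np.array([[0.7, 0.1], [0.6, 0.9]]),
            np.array([[0.8, 0.3], [0.2, 0.7]])]
pseudo = ensemble_pseudo_labels(teachers)
truth = np.array([[1, 0], [0, 1]])
print(segmentation_metrics(pseudo, truth))
```

In this toy case the averaged map thresholds to exactly the ground truth, so all four metrics come out to 1.0; on real fundus images the student would be trained on such pseudo-label masks (UKnowNet-A) or on a mix of pseudo-labels and ground truth (UKnowNet-B).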
dc.language.iso: eng [en_US]
dc.publisher: Gazi Üniversitesi [en_US]
dc.relation.isversionof: 10.54287/gujsa.1575986 [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Retinal Vessel Segmentation [en_US]
dc.subject: Knowledge Distillation and Enhancement [en_US]
dc.subject: Semi-supervised Learning [en_US]
dc.title: UKnow-Net: Knowledge-Enhanced U-Net for Improved Retinal Vessel Segmentation [en_US]
dc.type: article [en_US]
dc.relation.journal: Gazi University Journal of Science Part A: Engineering and Innovation [en_US]
dc.contributor.department: FSM Vakıf Üniversitesi, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü [en_US]
dc.contributor.authorID: https://orcid.org/0000-0001-8762-7233 [en_US]
dc.identifier.volume: 11 [en_US]
dc.identifier.issue: 4 [en_US]
dc.identifier.startpage: 742 [en_US]
dc.identifier.endpage: 758 [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.contributor.institutionauthor: Kuş, Zeki


Files in this item:


This item appears in the following Collection(s).
