Show simple item record

dc.contributor.author: Tatar, Güner
dc.contributor.author: Bayar, Salih
dc.date.accessioned: 2023-08-18T08:26:30Z
dc.date.available: 2023-08-18T08:26:30Z
dc.date.issued: 2023 [en_US]
dc.identifier.citation: TATAR, Güner, & Salih BAYAR. "Real-Time Multi-Task ADAS Implementation on Reconfigurable Heterogeneous MPSoC Architecture". IEEE Access, 4 (2023): 1-21. [en_US]
dc.identifier.uri: https://ieeexplore.ieee.org/document/10198234
dc.identifier.uri: https://hdl.handle.net/11352/4640
dc.description.abstract: The rapid adoption of Advanced Driver Assistance Systems (ADAS) in modern vehicles, aiming to elevate driving safety and experience, necessitates the real-time processing of high-definition video data. This requirement brings about considerable computational complexity and memory demands, highlighting a critical research void for a design integrating high FPS throughput with optimal Mean Average Precision (mAP) and Mean Intersection over Union (mIoU). Performance improvement at lower costs, multi-tasking ability on a single hardware platform, and flawless incorporation into memory-constrained devices are also essential for boosting ADAS performance. Addressing these challenges, this study proposes an ADAS multi-task learning hardware-software co-design approach underpinned by the Kria KV260 Multi-Processor System-on-Chip Field Programmable Gate Array (MPSoC-FPGA) platform. The approach facilitates efficient real-time execution of deep learning algorithms specific to ADAS applications. Utilizing the BDD100K, KITTI, and CityScapes datasets, our ADAS multi-task learning system endeavours to provide accurate and efficient multi-object detection, segmentation, and lane and drivable area detection in road images. The system deploys a segmentation-based object detection strategy, using a ResNet-18 backbone encoder and a Single Shot Detector architecture, coupled with quantization-aware training to augment inference performance without compromising accuracy. The ADAS multi-task learning system offers customization options for various ADAS applications and can be further optimized for increased precision and reduced memory usage. Experimental results showcase the system's capability to perform real-time multi-class object detection, segmentation, line detection, and drivable area detection on road images at approximately 25.4 FPS using a 1920x1080p Full HD camera. Impressively, the quantized model has demonstrated a 51% mAP for object detection, 56.62% mIoU for image segmentation, 43.86% mIoU for line detection, and 81.56% IoU for drivable area identification, reinforcing its high efficacy and precision. The findings underscore that the proposed ADAS multi-task learning system is a practical, reliable, and effective solution for real-world applications. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: IEEE [en_US]
dc.relation.isversionof: 10.1109/ACCESS.2023.3300379 [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: ADAS [en_US]
dc.subject: Deep learning [en_US]
dc.subject: Deep processing unit [en_US]
dc.subject: Memory allocation [en_US]
dc.subject: Multi-task learning [en_US]
dc.subject: MPSoC-FPGA architecture [en_US]
dc.subject: Vitis-AI [en_US]
dc.subject: Quantization aware training [en_US]
dc.title: Real-Time Multi-Task ADAS Implementation on Reconfigurable Heterogeneous MPSoC Architecture [en_US]
dc.type: article [en_US]
dc.relation.journal: IEEE Access [en_US]
dc.contributor.department: FSM Vakıf Üniversitesi, Mühendislik Fakültesi, Elektrik-Elektronik Mühendisliği Bölümü [en_US]
dc.identifier.volume: 4 [en_US]
dc.identifier.startpage: 1 [en_US]
dc.identifier.endpage: 21 [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.contributor.institutionauthor: Tatar, Güner
dc.contributor.institutionauthor: Bayar, Salih
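
The abstract above describes a segmentation-based detection pipeline built on a ResNet-18 backbone encoder with a Single Shot Detector head, trained with quantization-aware training (QAT) before deployment on the Kria KV260 MPSoC-FPGA through Vitis-AI. The sketch below is a minimal, hypothetical illustration of that QAT step using PyTorch's FX-mode quantization; the model definition, class count, anchor count, input size, and the "fbgemm" backend are placeholder assumptions, not details taken from the paper.

import torch
import torch.nn as nn
from torchvision.models import resnet18
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx, convert_fx

class DetectionNet(nn.Module):
    """Illustrative ResNet-18 encoder with a single-scale SSD-style prediction head."""
    def __init__(self, num_classes=10, num_anchors=4):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep only the convolutional stages (drop global pooling and the FC classifier).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # One conv predicts (4 box offsets + class scores) per anchor per feature-map cell.
        self.head = nn.Conv2d(512, num_anchors * (4 + num_classes), kernel_size=3, padding=1)

    def forward(self, x):
        return self.head(self.encoder(x))

model = DetectionNet().train()

# Insert fake-quantization observers so training "sees" simulated int8 rounding effects.
qconfig_mapping = get_default_qat_qconfig_mapping("fbgemm")
example_inputs = (torch.randn(1, 3, 512, 512),)
model_qat = prepare_qat_fx(model, qconfig_mapping, example_inputs)

# ... usual detection training loop (localization + classification losses) goes here ...

# Convert the fake-quantized graph into an actual int8 model for deployment.
model_int8 = convert_fx(model_qat.eval())

In the paper's flow the quantized network would instead be processed by the Vitis-AI toolchain and compiled for the KV260's DPU; the PyTorch-native QAT above only illustrates the general idea of training with simulated int8 arithmetic so that accuracy survives quantization.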


Files in this item:


This item appears in the following Collection(s).
