dc.contributor.author | Tatar, Güner | |
dc.contributor.author | Bayar, Salih | |
dc.date.accessioned | 2023-08-18T08:26:30Z | |
dc.date.available | 2023-08-18T08:26:30Z | |
dc.date.issued | 2023 | en_US |
dc.identifier.citation | TATAR, Güner, & Salih BAYAR. "Real-Time Multi-Task ADAS Implementation on Reconfigurable Heterogeneous MPSoC Architecture". IEEE Access, 4 (2023): 1-21. | en_US |
dc.identifier.uri | https://ieeexplore.ieee.org/document/10198234 | |
dc.identifier.uri | https://hdl.handle.net/11352/4640 | |
dc.description.abstract | The rapid adoption of Advanced Driver Assistance Systems (ADAS) in modern vehicles,
aiming to elevate driving safety and experience, necessitates the real-time processing of high-definition
video data. This requirement brings about considerable computational complexity and memory demands,
highlighting a critical research void for a design integrating high FPS throughput with optimal Mean
Average Precision (mAP) and Mean Intersection over Union (mIoU). Performance improvement at lower
costs, multi-tasking ability on a single hardware platform, and flawless incorporation into memory-constrained
devices are also essential for boosting ADAS performance. Addressing these challenges,
this study proposes an ADAS multi-task learning hardware-software co-design approach underpinned
by the Kria KV260 Multi-Processor System-on-Chip Field Programmable Gate Array (MPSoC-FPGA)
platform. The approach facilitates efficient real-time execution of deep learning algorithms specific to ADAS
applications. Utilizing the BDD100K, KITTI, and CityScapes datasets, our ADAS multi-task learning
system endeavours to provide accurate and efficient multi-object detection, segmentation, and lane and
drivable area detection in road images. The system deploys a segmentation-based object detection strategy,
using a ResNet-18 backbone encoder and a Single Shot Detector architecture, coupled with quantization-aware
training to augment inference performance without compromising accuracy. The ADAS multi-task
learning offers customization options for various ADAS applications and can be further optimized for
increased precision and reduced memory usage. Experimental results showcase the system’s capability to
perform real-time multi-class object detection, segmentation, line detection, and drivable area detection on
road images at approximately 25.4 FPS using a 1920x1080p Full HD camera. Impressively, the quantized
model has demonstrated a 51% mAP for object detection, 56.62% mIoU for image segmentation, 43.86%
mIoU for line detection, and 81.56% IoU for drivable area identification, reinforcing its high efficacy
and precision. The findings underscore that the proposed ADAS multi-task learning system is a practical,
reliable, and effective solution for real-world applications. | en_US |
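The quantization-aware training (QAT) step named in the abstract can be illustrated with a short, hypothetical sketch. The code below uses PyTorch's generic eager-mode QAT utilities on a ResNet-18 encoder with two placeholder task heads; it is not the authors' implementation, and the model structure, class count, input size, and losses are illustrative assumptions (the actual deployment targets the Kria KV260 DPU through Vitis-AI, which is not shown).

    # Hypothetical sketch (not the authors' code): quantization-aware training of a
    # ResNet-18-backbone multi-task model, using PyTorch's eager-mode QAT utilities.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class MultiTaskNet(nn.Module):
        """ResNet-18 encoder with two illustrative heads (detection and segmentation)."""
        def __init__(self, num_classes=10):
            super().__init__()
            backbone = resnet18(weights=None)
            self.quant = torch.ao.quantization.QuantStub()      # marks the float->int8 boundary
            self.dequant = torch.ao.quantization.DeQuantStub()  # marks the int8->float boundary
            self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
            self.det_head = nn.Conv2d(512, num_classes * 4, kernel_size=3, padding=1)
            self.seg_head = nn.Conv2d(512, num_classes, kernel_size=1)

        def forward(self, x):
            x = self.quant(x)
            feats = self.encoder(x)
            return self.dequant(self.det_head(feats)), self.dequant(self.seg_head(feats))

    model = MultiTaskNet().train()
    # Attach fake-quantization observers so training "sees" int8 rounding and clipping.
    model.qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm")
    torch.ao.quantization.prepare_qat(model, inplace=True)

    # One illustrative training step with placeholder losses; a real run would use the
    # BDD100K/KITTI/CityScapes loaders and task-specific detection/segmentation losses.
    imgs = torch.randn(2, 3, 512, 512)
    det_out, seg_out = model(imgs)
    loss = det_out.abs().mean() + seg_out.abs().mean()
    loss.backward()
    # Export and compilation of the trained model for the MPSoC DPU would follow the
    # vendor (Vitis-AI) flow and is not shown here.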
dc.language.iso | eng | en_US |
dc.publisher | IEEE | en_US |
dc.relation.isversionof | 10.1109/ACCESS.2023.3300379 | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | ADAS | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Deep processing unit | en_US |
dc.subject | Memory allocation | en_US |
dc.subject | Multi-task learning | en_US |
dc.subject | MPSoC-FPGA architecture | en_US |
dc.subject | Vitis-AI | en_US |
dc.subject | Quantization aware training | en_US |
dc.title | Real-Time Multi-Task ADAS Implementation on Reconfigurable Heterogeneous MPSoC Architecture | en_US |
dc.type | article | en_US |
dc.relation.journal | IEEE Access | en_US |
dc.contributor.department | FSM Vakıf Üniversitesi, Mühendislik Fakültesi, Elektrik-Elektronik Mühendisliği Bölümü | en_US |
dc.identifier.volume | 4 | en_US |
dc.identifier.startpage | 1 | en_US |
dc.identifier.endpage | 21 | en_US |
dc.relation.publicationcategory | Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı | en_US |
dc.contributor.institutionauthor | Tatar, Güner | |
dc.contributor.institutionauthor | Bayar, Salih | |