Show simple item record

dc.contributor.author: Erenoğlu, Ayşe Kübra
dc.contributor.author: Tatar, Güner
dc.date.accessioned: 2023-08-04T11:22:34Z
dc.date.available: 2023-08-04T11:22:34Z
dc.date.issued: 2023 [en_US]
dc.identifier.citation: ERENOĞLU, Ayşe Kübra & Güner TATAR. "Real-Time Hardware Acceleration of Low Precision Quantized Custom Neural Network Model on ZYNQ SoC". 5th International Congress on Human-Computer Interaction, Optimization and Robotic Applications, HORA 2023, (2023): 1-6. [en_US]
dc.identifier.uri: https://hdl.handle.net/11352/4631
dc.description.abstract: Achieving a lower memory footprint and reduced computational density in neural network models requires low-precision models. However, existing techniques typically rely on floating-point arithmetic to preserve accuracy, which is problematic for convolutional neural networks (CNNs) whose memory requirements become substantial when floating-point numbers are used. Additionally, larger bit widths lead to higher computational density in hardware architectures. As a result, current models have grown into deeper networks with sometimes billions of parameters to address contemporary problems, increasing computational complexity and causing memory allocation issues. These challenges render existing hardware accelerators insufficient. In scenarios where hardware complexity can be traded off for accuracy, model quantization enables neural networks to be implemented with limited hardware resources. From a hardware design standpoint, quantized models offer notable advantages in speed, memory utilization, and power consumption compared to traditional floating-point arithmetic. To this end, we propose a method for detecting network intrusions by quantizing weights and activation functions with the Brevitas library in a custom multi-layer detector. We conducted real-time experiments of the technique on the ZYNQ System-on-Chip (SoC) using the FINN framework, which deploys deep neural networks on Field Programmable Gate Arrays (FPGAs), achieving an accuracy of approximately 92%. We selected the UNSW-NB15 dataset, generated by the Australian Centre for Cyber Security (ACCS), for the investigation. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: IEEE [en_US]
dc.relation.isversionof: 10.1109/HORA58378.2023.10155783 [en_US]
dc.rights: info:eu-repo/semantics/embargoedAccess [en_US]
dc.subject: FINN experimental framework [en_US]
dc.subject: Quantization-aware training [en_US]
dc.subject: System-on-chip field programmable gate arrays [en_US]
dc.subject: Low-precision arithmetic [en_US]
dc.title: Real-Time Hardware Acceleration of Low Precision Quantized Custom Neural Network Model on ZYNQ SoC [en_US]
dc.type: conferenceObject [en_US]
dc.relation.journal: 5th International Congress on Human-Computer Interaction, Optimization and Robotic Applications, HORA 2023 [en_US]
dc.contributor.department: FSM Vakıf Üniversitesi, Mühendislik Fakültesi, Elektrik-Elektronik Mühendisliği Bölümü [en_US]
dc.contributor.authorID: https://orcid.org/0000-0002-9578-6194 [en_US]
dc.contributor.authorID: https://orcid.org/0000-0002-3664-1366 [en_US]
dc.identifier.startpage: 1 [en_US]
dc.identifier.endpage: 6 [en_US]
dc.relation.publicationcategory: Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institutional Faculty Member) [en_US]
dc.contributor.institutionauthor: Erenoğlu, Ayşe Kübra
dc.contributor.institutionauthor: Tatar, Güner
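The low-precision approach described in the abstract rests on quantizing weights and activations to small integer bit widths. A minimal, self-contained sketch of symmetric uniform quantization follows; it is illustrative only (the paper uses the Brevitas library, which applies this idea with gradient-aware handling inside each network layer), and the bit width and sample weights are assumptions.

```python
def quantize(x, bit_width=4):
    """Uniformly quantize a list of floats to signed bit_width integers
    (symmetric, per-tensor scale), then dequantize back to floats.
    Sketch of the core idea behind quantization-aware training; not
    the Brevitas implementation."""
    qmax = 2 ** (bit_width - 1) - 1                # e.g. 7 for 4-bit signed
    scale = (max(abs(v) for v in x) / qmax) or 1.0  # guard all-zero input
    # Round to the nearest integer code, clamped to the signed range
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in x]
    # Dequantized values approximate the originals to within scale/2
    return [qi * scale for qi in q], q

weights = [0.91, -0.42, 0.07, -0.88]  # illustrative weight values
dequant, ints = quantize(weights)
print(ints)  # → [7, -3, 1, -7], 4-bit codes in [-8, 7]
```

Storing the integer codes plus one scale factor is what yields the memory and compute savings on the FPGA fabric: 4-bit codes take an eighth of the space of 32-bit floats, at the cost of rounding error bounded by half the scale.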

