Show simple item record

dc.contributor.author: Tatar, Güner
dc.contributor.author: Bayar, Salih
dc.contributor.author: Çiçek, İhsan
dc.date.accessioned: 2022-10-21T09:46:00Z
dc.date.available: 2022-10-21T09:46:00Z
dc.date.issued: 2022 (en_US)
dc.identifier.citation: TATAR, Güner, Salih BAYAR & İhsan ÇİÇEK. "Performance Evaluation of Low-Precision Quantized LeNet and ConvNet Neural Networks". 2022 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), (2022): 1-6. (en_US)
dc.identifier.uri: https://hdl.handle.net/11352/4188
dc.description.abstract: Low-precision neural network models are crucial for reducing the memory footprint and computational density. However, existing methods typically rely on 32-bit floating-point (FP32) arithmetic to maintain accuracy. Floating-point numbers impose heavy memory requirements in convolutional and deep neural network models, and large bit-widths cause high computational density in hardware architectures. Moreover, existing models must evolve into deeper network models with millions or billions of parameters to solve today's problems. The large number of model parameters increases the computational complexity and causes memory allocation problems, so existing hardware accelerators become insufficient to address these problems. In applications where accuracy can be traded off for the sake of hardware complexity, quantization of models enables the use of limited hardware resources to implement neural networks. From a hardware design point of view, quantized models are more advantageous than FP32 in terms of speed, memory usage, and power consumption. In this study, we compared the training and testing accuracy of the quantized LeNet and our own ConvNet neural network models at different epochs. We quantized the models using low-precision int-4, int-8, and int-16. As a result of the tests, we observed that the LeNet model could only reach 63.59% test accuracy at 400 epochs with int-16. On the other hand, the ConvNet model achieved a test accuracy of 76.78% at only 40 epochs with low-precision int-8 quantization. (en_US)
dc.language.iso: eng (en_US)
dc.publisher: IEEE (en_US)
dc.relation.isversionof: 10.1109/INISTA55318.2022.9894261 (en_US)
dc.rights: info:eu-repo/semantics/embargoedAccess (en_US)
dc.subject: Convolutional Neural Networks (en_US)
dc.subject: Quantized Neural Networks (en_US)
dc.subject: FPGA (en_US)
dc.subject: Hardware Accelerators (en_US)
dc.subject: Floating Point Arithmetic (en_US)
dc.subject: Fixed Point Arithmetic (en_US)
dc.subject: LeNet (en_US)
dc.subject: ConvNet (en_US)
dc.title: Performance Evaluation of Low-Precision Quantized LeNet and ConvNet Neural Networks (en_US)
dc.type: conferenceObject (en_US)
dc.relation.journal: 2022 International Conference on INnovations in Intelligent SysTems and Applications (INISTA) (en_US)
dc.contributor.department: FSM Vakıf Üniversitesi, Mühendislik Fakültesi, Elektrik-Elektronik Mühendisliği Bölümü (en_US)
dc.identifier.startpage: 1 (en_US)
dc.identifier.endpage: 6 (en_US)
dc.relation.publicationcategory: Conference Item - International - Institutional Faculty Member (en_US)
dc.contributor.institutionauthor: Tatar, Güner
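The abstract above contrasts FP32 arithmetic with low-precision int-4, int-8, and int-16 formats. As a minimal illustrative sketch (not the paper's actual quantization scheme, which is not specified in this record), symmetric per-tensor linear quantization to a signed b-bit integer can look like this:

```python
import numpy as np

def quantize(x, bits):
    """Symmetric linear quantization of a float tensor to signed `bits`-bit
    integers. Illustrative only; the paper's exact scheme may differ."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for int-8, 7 for int-4
    scale = np.max(np.abs(x)) / qmax        # one scale factor per tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover an FP32 approximation of the original tensor."""
    return q.astype(np.float32) * scale
```

Lower bit-widths leave fewer representable levels (15 nonzero magnitudes for int-4 versus 255 levels for int-8), which is one reason accuracy tends to degrade as precision drops, consistent with the trade-off the abstract reports.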


Files in this item:


This item appears in the following Collection(s).
