
Performance Evaluation of Low-Precision Quantized LeNet and ConvNet Neural Networks

File

Conference Item (4.619 MB)

Access

info:eu-repo/semantics/embargoedAccess

Date

2022

Author

Tatar, Güner
Bayar, Salih
Çiçek, İhsan


Citation

TATAR, Güner, Salih BAYAR & İhsan ÇİÇEK. "Performance Evaluation of Low-Precision Quantized LeNet and ConvNet Neural Networks". 2022 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), (2022): 1-6.

Abstract

Low-precision neural network models are crucial for reducing memory footprint and computational density. However, existing methods typically rely on 32-bit floating-point (FP32) arithmetic to maintain accuracy. Floating-point numbers impose heavy memory requirements on convolutional and deep neural network models, and large bit-widths cause excessive computational density in hardware architectures. Moreover, models must evolve into deeper networks with millions or billions of parameters to solve today's problems. This large number of parameters increases computational complexity and causes memory-allocation problems, so existing hardware accelerators become insufficient. In applications where accuracy can be traded off against hardware complexity, quantizing models enables neural networks to be implemented with limited hardware resources. From a hardware-design point of view, quantized models are more advantageous than FP32 in terms of speed, memory, and power consumption. In this study, we compared the training and test accuracy of the quantized LeNet and our own ConvNet neural network models at different epochs, quantizing the models to low-precision int-4, int-8, and int-16. In our tests, the LeNet model reached only 63.59% test accuracy at 400 epochs with int-16, whereas the ConvNet model achieved 76.78% test accuracy at only 40 epochs with low-precision int-8 quantization.
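The abstract's int-4/int-8/int-16 quantization can be illustrated with a minimal sketch of symmetric, per-tensor uniform quantization. This is an assumption-level illustration of the general technique, not the authors' implementation; the function names and the rounding/clamping choices are hypothetical.

```python
# Minimal sketch of symmetric per-tensor uniform quantization:
# map FP32 values to signed int-N codes and dequantize them back.
# (Illustrative only; not the paper's actual quantization scheme.)

def quantize(weights, bits=8):
    """Quantize a list of floats to signed int-`bits` codes with one scale."""
    qmax = 2 ** (bits - 1) - 1               # e.g. 127 for int-8
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / qmax                   # one scale per tensor
    # round to nearest integer, clamp to the representable range
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(codes, scale):
    """Recover approximate FP32 values from the integer codes."""
    return [c * scale for c in codes]

weights = [0.5, -0.25, 0.1, -1.0]
q, scale = quantize(weights, bits=8)
approx = dequantize(q, scale)
```

With one shared scale per tensor, the worst-case reconstruction error is half a quantization step (scale / 2); shrinking `bits` from 16 to 4 grows that step, which is the accuracy-versus-hardware-cost trade-off the study measures.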

Source

2022 International Conference on INnovations in Intelligent SysTems and Applications (INISTA)

URI

https://hdl.handle.net/11352/4188

Collections

  • Elektrik-Elektronik Mühendisliği Bölümü [67]
  • Scopus İndeksli Yayınlar / Scopus Indexed Publications [630]



DSpace software copyright © 2002-2015  DuraSpace
Contact Us | Send Feedback
Theme by @mire NV
 

 






FSM Vakıf University, İstanbul, Turkey
FSM Vakıf University Institutional Repository is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 Unported License.
