International Journal of Advance Computational Engineering and Networking (IJACEN)
  Journal Paper


Paper Title: An FPGA based Hardware Accelerator with Binary Weights for Deep Neural Networks

Authors: Sreehari R., Deepu Vijayasenan, Arulalan Rajan

Article Citation: Sreehari R., Deepu Vijayasenan, Arulalan Rajan, (2018) "An FPGA based Hardware Accelerator with Binary Weights for Deep Neural Networks", International Journal of Advance Computational Engineering and Networking (IJACEN), pp. 32-36, Volume-6, Issue-10

Abstract: This paper describes the implementation of a systolic-array-based hardware accelerator for multilayer perceptrons (MLPs) on an FPGA. Full-precision hardware implementations of neural networks consume considerable resources, making it difficult to fit large networks on an FPGA, and they also have high power consumption. Neural networks are built from numerous Multiply and Accumulate (MAC) units, and the multipliers in these MAC units are expensive in terms of power. Algorithms have been proposed that quantize the weights and eliminate the need for multipliers without compromising much on classification accuracy; these algorithms replace MAC units with simple accumulators, and the quantized weights also reduce weight-storage requirements. A systolic-array-based neural network architecture has been implemented on an FPGA and modified according to the BinaryConnect algorithm, which quantizes the weights into two levels. All implementations have been verified on the MNIST dataset, and the classification accuracy of the hardware implementations is comparable to that of their software counterparts. The designed hardware accelerator reduces resource utilization by a factor of 12.6 compared to a baseline hardware implementation with high-precision weights, high-precision inputs, and standard MAC units. Power consumption is halved and the critical-path delay is reduced by a factor of 2.4. Thus, larger neural networks can be implemented on an FPGA and run at higher frequencies with less power.

Keywords: Hardware Accelerator, Systolic Array, Deep Neural Networks
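The weight-quantization idea summarized in the abstract can be illustrated with a short software sketch (not taken from the paper): when every weight is binarized to +1 or -1 in the spirit of BinaryConnect, each multiply-accumulate collapses to a sign-conditional add or subtract, which is what allows the hardware to replace full MAC units with plain accumulators. The NumPy layer below is a minimal sketch of that behaviour; the shapes, function names, and ReLU activation are assumptions made for the example only.

# Illustrative sketch (not from the paper): one binary-weight MLP layer where
# every multiply-accumulate is replaced by a sign-conditional add/subtract.
import numpy as np

def binarize(weights):
    """Quantize real-valued weights to the two levels {-1, +1} via sign."""
    return np.where(weights >= 0, 1.0, -1.0)

def binary_layer(x, real_weights, bias):
    """Forward pass of one layer using binarized weights.

    Because every weight is +1 or -1, the dot product reduces to adding or
    subtracting inputs -- the operation an accumulator performs in place of
    a full MAC unit.
    """
    wb = binarize(real_weights)                # two-level weights
    acc = np.zeros((x.shape[0], wb.shape[1]))  # one accumulator per output neuron
    for j in range(wb.shape[1]):
        pos = wb[:, j] > 0
        # add inputs where the weight is +1, subtract where it is -1
        acc[:, j] = x[:, pos].sum(axis=1) - x[:, ~pos].sum(axis=1)
    return np.maximum(acc + bias, 0.0)         # ReLU activation (assumed)

# Tiny usage example with random data standing in for MNIST-sized inputs.
rng = np.random.default_rng(0)
x = rng.random((4, 784))                 # 4 samples, 784 pixels each
w = rng.standard_normal((784, 128))      # real-valued shadow weights
b = np.zeros(128)
out = binary_layer(x, w, b)
print(out.shape)                         # (4, 128)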

Type: Research paper

Published: Volume-6, Issue-10


DOIONLINE NO: IJACEN-IRAJ-DOIONLINE-13834

Copyright: © Institute of Research and Journals

Published on: 2018-12-22