Title/Summary/Keyword: BSPE

Implementation of low power BSPE Core for deep learning hardware accelerators

  • Jo, Cheol-Won; Lee, Kwang-Yeob; Nam, Ki-Hun
    • Journal of IKEEE, v.24 no.3, pp.895-900, 2020
  • In this paper, BSPE replaces the conventional multiplication algorithm, which consumes a large amount of power. A bit-serial multiplier reduces hardware resources, and variable-precision integer data reduces memory usage. In addition, applying LOA (Lower-part OR Approximation) to the MOA (Multi Operand Adder) that accumulates the partial sums reduces the MOA's resource and power usage. As a result, compared to the existing MBS (Multiplication by Barrel Shifter), hardware resources are reduced by 44% and power consumption by 42%. We also propose a hardware architecture design for the BSPE Core.
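
As a rough illustration of the two techniques named in the abstract, the C sketch below combines a bit-serial multiplier (scanning one multiplier bit per step and accumulating shifted partial products) with an LOA-style adder that ORs the low k bits and adds only the upper bits exactly. This is a hypothetical software model, not the paper's RTL: the split point k, the MSB-AND carry guess (taken from the common LOA formulation), and all function names are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* LOA-style approximate add: the low k bits (k >= 1) of the sum are
 * approximated with bitwise OR; only the upper bits use an exact adder.
 * The carry into the upper part is guessed by ANDing the most
 * significant bits of the two lower parts (the usual LOA carry guess). */
static uint32_t loa_add(uint32_t a, uint32_t b, unsigned k)
{
    uint32_t low_mask = (1u << k) - 1u;
    uint32_t low   = (a | b) & low_mask;                      /* approx low */
    uint32_t carry = ((a >> (k - 1)) & (b >> (k - 1))) & 1u;  /* carry guess */
    uint32_t high  = ((a >> k) + (b >> k) + carry) << k;      /* exact high */
    return high | low;
}

/* Bit-serial multiply: one multiplier bit per step, folding the shifted
 * partial products together with the approximate adder above. */
static uint32_t bit_serial_mul_loa(uint16_t x, uint16_t w, unsigned k)
{
    uint32_t acc = 0;
    for (unsigned i = 0; i < 16; i++)
        if ((w >> i) & 1u)
            acc = loa_add(acc, (uint32_t)x << i, k);
    return acc;
}

int main(void)
{
    uint16_t x = 123, w = 45;
    printf("exact  : %u\n", (unsigned)x * w);
    printf("approx : %u\n", bit_serial_mul_loa(x, w, 4));
    return 0;
}
```

The error introduced by the OR'd lower part stays bounded by 2^k, which is why trading those adder bits for OR gates can cut area and power with little accuracy loss in DNN workloads.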

Low-area DNN Core using data reuse technique

  • Jo, Cheol-Won; Lee, Kwang-Yeob; Kim, Chi-Yong
    • Journal of IKEEE, v.25 no.1, pp.229-233, 2021
  • An NPU in an embedded environment must execute deep learning algorithms with few hardware resources. By reusing data, these algorithms can be computed efficiently with fewer resources. In previous studies, data was reused through a shifter in the ScratchPad; however, as the ScratchPad's bandwidth grows, the shifter itself consumes substantial resources. We therefore present a data reuse technique based on a Buffer Round Robin method, which reduces chip area by about 4.7% compared to the conventional shifter-based approach.
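
The Buffer Round Robin idea can be sketched in software: rather than shifting data through the ScratchPad each cycle, a base pointer rotates over a set of buffer banks, so each consumer sees a "shifted" view while the data stays in place. The C sketch below is a hypothetical model under that reading of the abstract; the bank count, buffer length, and all names (rr_bank, rr_get, rr_rotate) are illustrative, not taken from the paper.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_BUF 4
#define BUF_LEN 8

/* Round-robin buffer bank: a base index rotates over the banks, so a
 * one-step "shift" costs a single pointer update instead of moving
 * BUF_LEN words through a shifter. */
struct rr_bank {
    int16_t  buf[NUM_BUF][BUF_LEN];
    unsigned base;                 /* which bank is logically first */
};

/* Logical bank i maps to physical bank (base + i) mod NUM_BUF. */
static int16_t *rr_get(struct rr_bank *rb, unsigned i)
{
    return rb->buf[(rb->base + i) % NUM_BUF];
}

/* Advancing the base pointer replaces a physical data shift. */
static void rr_rotate(struct rr_bank *rb)
{
    rb->base = (rb->base + 1) % NUM_BUF;
}

int main(void)
{
    struct rr_bank rb = { .base = 0 };
    for (unsigned b = 0; b < NUM_BUF; b++)
        for (unsigned j = 0; j < BUF_LEN; j++)
            rb.buf[b][j] = (int16_t)(b * 100 + j);

    printf("logical bank 0 before rotate: %d\n", rr_get(&rb, 0)[0]);
    rr_rotate(&rb);   /* one index update, no data movement */
    printf("logical bank 0 after  rotate: %d\n", rr_get(&rb, 0)[0]);
    return 0;
}
```

The design point this models is that the cost of the rotation is constant in the ScratchPad's width, whereas a physical shifter grows with bandwidth, which is consistent with the area saving the abstract reports.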