• Title/Summary/Keyword: Neural Processing Unit (NPU)

Cycle-accurate NPU Simulator and Performance Evaluation According to Data Access Strategies (Cycle-accurate NPU 시뮬레이터 및 데이터 접근 방식에 따른 NPU 성능평가)

  • Kwon, Guyun;Park, Sangwoo;Suh, Taeweon
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.4
    • /
    • pp.217-228
    • /
    • 2022
  • Demand is increasing for applying deep neural networks (DNNs) to embedded-domain tasks such as classification and object detection. DNN processing in the embedded domain often requires custom hardware, such as an NPU, for acceleration due to constraints on power, performance, and area. Processing DNN models requires a large amount of data, and its seamless transfer to the NPU is crucial for performance. In this paper, we developed a cycle-accurate NPU simulator to evaluate diverse NPU microarchitectures. In addition, we propose a novel technique for reducing the number of memory accesses when processing the convolutional layers of convolutional neural networks (CNNs) on the NPU. The main idea is to reuse data with memory interleaving, recycling the data that overlaps between the previous and current input windows. Data-memory interleaving makes it possible to quickly read consecutive data at unaligned locations. We implemented the proposed technique in the cycle-accurate NPU simulator and measured performance with LeNet-5, VGGNet-16, and ResNet-50. The experiments show up to a 2.08x speedup in processing a single convolutional layer compared to the baseline.
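
The paper realizes this reuse with interleaved data memories in hardware; the sketch below is only a software analogy of the idea, with all names ours, counting memory reads for one output row to show where the overlap savings come from.

```python
import numpy as np

def conv_row_naive(ifmap, kernel, row):
    """Naive sliding window: every output re-reads all K*K inputs."""
    K = kernel.shape[0]
    outputs, reads = [], 0
    for col in range(ifmap.shape[1] - K + 1):
        window = ifmap[row:row + K, col:col + K]
        reads += K * K
        outputs.append(np.sum(window * kernel))
    return np.array(outputs), reads

def conv_row_reuse(ifmap, kernel, row):
    """Window reuse: keep the current K x K window in a local buffer,
    shift it left one column per step, and fetch only the single new
    column, recycling the K-1 columns shared with the previous window."""
    K = kernel.shape[0]
    buf = ifmap[row:row + K, 0:K].copy()          # first window: K*K reads
    outputs, reads = [np.sum(buf * kernel)], K * K
    for col in range(1, ifmap.shape[1] - K + 1):
        buf[:, :-1] = buf[:, 1:]                       # reuse the overlap
        buf[:, -1] = ifmap[row:row + K, col + K - 1]   # only K new reads
        reads += K
        outputs.append(np.sum(buf * kernel))
    return np.array(outputs), reads

ifmap = np.arange(64, dtype=np.float32).reshape(8, 8)
kernel = np.ones((3, 3), dtype=np.float32)
out_a, reads_a = conv_row_naive(ifmap, kernel, 0)
out_b, reads_b = conv_row_reuse(ifmap, kernel, 0)
assert np.allclose(out_a, out_b)
print(reads_a, reads_b)   # 54 vs. 24 reads for one 3x3 row over width 8
```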

A layer-wise frequency scaling for a neural processing unit

  • Chung, Jaehoon;Kim, HyunMi;Shin, Kyoungseon;Lyuh, Chun-Gi;Cho, Yong Cheol Peter;Han, Jinho;Kwon, Youngsu;Gong, Young-Ho;Chung, Sung Woo
    • ETRI Journal
    • /
    • v.44 no.5
    • /
    • pp.849-858
    • /
    • 2022
  • Dynamic voltage and frequency scaling (DVFS) has been widely adopted for the runtime power management of various processing units. In the case of neural processing units (NPUs), power management of neural-network applications requires adjusting the frequency and voltage for every layer to account for each layer's power behavior and performance. Unfortunately, DVFS is inappropriate for the layer-wise runtime power management of NPUs because the latency of voltage scaling is long compared with the execution time of each layer. Because frequency scaling alone is fast enough to keep up with each layer, we propose a layer-wise dynamic frequency scaling (DFS) technique for an NPU. Our proposed DFS selects the highest frequency under the NPU's power limit for each layer. To determine the highest allowable frequency, we build a power model that predicts the power consumption of an NPU, based on real measurements of a fabricated NPU. Our evaluation results show that, compared with DVFS, our proposed DFS improves frames per second (FPS) by 33% and saves energy by 14% on average.
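
As a rough illustration of the layer-wise DFS policy, the sketch below picks, before each layer, the highest available frequency whose predicted power stays under the NPU's budget. The linear per-layer power model (P = a*f + b, voltage fixed since only frequency is scaled), the frequency steps, and all coefficients are our assumptions for illustration, not the paper's measured model.

```python
FREQ_STEPS_MHZ = [200, 400, 600, 800, 1000]   # assumed available DFS steps
POWER_LIMIT_W = 1.5                            # assumed NPU power budget

def pick_frequency(layer_coeffs, freqs=FREQ_STEPS_MHZ, limit=POWER_LIMIT_W):
    """Return the highest frequency whose predicted power fits the budget."""
    a, b = layer_coeffs   # per-layer model: P(f) = a * f + b (illustrative)
    feasible = [f for f in freqs if a * f + b <= limit]
    return max(feasible) if feasible else min(freqs)

# Example: a compute-heavy conv layer draws more power per MHz than pooling.
conv_coeffs = (0.0016, 0.3)
pool_coeffs = (0.0004, 0.2)
print(pick_frequency(conv_coeffs))   # 600: throttled to stay under 1.5 W
print(pick_frequency(pool_coeffs))   # 1000: light layer runs at the top step
```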

Road Image Recognition Technology based on Deep Learning Using TIDL NPU in SoC Environment (SoC 환경에서 TIDL NPU를 활용한 딥러닝 기반 도로 영상 인식 기술)

  • Yunseon Shin;Juhyun Seo;Minyoung Lee;Injung Kim
    • Smart Media Journal
    • /
    • v.11 no.11
    • /
    • pp.25-31
    • /
    • 2022
  • Deep learning-based image processing is essential for autonomous vehicles. To process road images in real time in a System-on-Chip (SoC) environment, deep learning models must run on an NPU (Neural Processing Unit) specialized for deep learning operations. In this study, we ported seven open-source deep learning models for image processing, originally developed on GPU servers, to the Texas Instruments Deep Learning (TIDL) NPU environment. Through performance evaluation and visualization, we confirmed that the ported models operate normally in the SoC virtual environment. This paper describes the problems that arose during migration due to the limitations of the NPU environment and how we solved them, thereby providing a reference case for developers and researchers who want to port deep learning models to SoC environments.

Optimizing 2-stage Tiling-based Matrix Multiplication in FPGA-based Neural Network Accelerator (FPGA기반 뉴럴네트워크 가속기에서 2차 타일링 기반 행렬 곱셈 최적화)

  • Kwon, Jinse;Lee, Jemin;Kwon, Yongin;Park, Jeman;Yu, Misun;Kim, Taeho;Kim, Hyungshin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.6
    • /
    • pp.367-374
    • /
    • 2022
  • The acceleration of neural networks has become an important topic in the field of computer vision. An accelerator is essential even for lightweight models, yet most operators supported by accelerators focus on direct convolution; if an accelerator does not provide a GEMM operation, GEMM is usually executed on the CPU instead. In this paper, we propose an optimization technique for 2-stage tiling-based GEMM routines on VTA. We improve the performance of the matrix-multiplication routine by maximizing the reuse of the input matrix and optimizing operation pipelining. In addition, we applied the proposed technique to the DarkNet framework to verify the improvement of the matrix-multiplication routine. The proposed GEMM method shows a performance improvement of more than 2.4x over the non-optimized GEMM method, and the inference performance of our DarkNet framework also improves by at least 2.3x.
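
The two-stage structure can be sketched in plain Python: an outer tile size stands in for the on-chip buffer, an inner tile size for the fixed-size compute block, and each outer tile of the input matrix is loaded once and reused against a whole row band of the other operand. This is a generic minimal sketch of two-level tiling under our own tile sizes, not the paper's VTA implementation (VTA's block size, buffering, and pipelining are fixed by the hardware).

```python
import numpy as np

def gemm_two_stage(A, B, T1=64, T0=16):
    """Two-stage tiled GEMM: T1 tiles model the on-chip buffer,
    T0 tiles model the compute block. Each A tile is fetched once
    and reused for every B tile in the same row band."""
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=A.dtype)
    for i1 in range(0, M, T1):                 # stage 1: buffer-level tiles
        for k1 in range(0, K, T1):
            a_tile = A[i1:i1+T1, k1:k1+T1]     # loaded once, reused below
            for j1 in range(0, N, T1):
                b_tile = B[k1:k1+T1, j1:j1+T1]
                for i0 in range(0, a_tile.shape[0], T0):   # stage 2: blocks
                    for j0 in range(0, b_tile.shape[1], T0):
                        for k0 in range(0, a_tile.shape[1], T0):
                            C[i1+i0:i1+i0+T0, j1+j0:j1+j0+T0] += (
                                a_tile[i0:i0+T0, k0:k0+T0]
                                @ b_tile[k0:k0+T0, j0:j0+T0])
    return C

A = np.random.rand(128, 96).astype(np.float32)
B = np.random.rand(96, 128).astype(np.float32)
assert np.allclose(gemm_two_stage(A, B), A @ B, atol=1e-3)
```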

Trends of Low-Precision Processing for AI Processor (NPU 반도체를 위한 저정밀도 데이터 타입 개발 동향)

  • Kim, H.J.;Han, J.H.;Kwon, Y.S.
    • Electronics and Telecommunications Trends
    • /
    • v.37 no.1
    • /
    • pp.53-62
    • /
    • 2022
  • With the increasing size of transformer-based neural networks, lightweight algorithms and efficient AI accelerators have been developed to train these huge networks within practical design times. In this article, we present a survey of state-of-the-art research on low-precision computational algorithms, especially floating-point formats, and their hardware accelerators. We describe the trends by focusing on the work of two leading research groups, IBM and Seoul National University, which have deep knowledge of both AI algorithms and hardware architecture. On the algorithm side, we summarize two efficient floating-point formats (hybrid FP8 and radix-4 FP4) together with the accuracy-preserving training algorithms in the main research stream. Moreover, we describe an AI processor architecture supporting a low-bit mixed-precision computing unit that includes an integer engine.
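
For intuition on what an 8-bit floating-point format costs in precision, the sketch below rounds values onto a generic 1-sign/E-exponent/M-mantissa minifloat grid; exp_bits=4, man_bits=3 roughly corresponds to the forward-pass half of hybrid FP8 and exp_bits=5, man_bits=2 to the backward-pass half. This is our simplified illustration (IEEE-style bias, no proper subnormal or special-value handling), not IBM's exact specification.

```python
import numpy as np

def quantize_minifloat(x, exp_bits=4, man_bits=3):
    """Round-to-nearest quantization onto a 1/E/M minifloat grid
    (simplified: tiny values flush to zero, no NaN/Inf encoding)."""
    bias = 2 ** (exp_bits - 1) - 1
    m, e = np.frexp(np.abs(x))                 # |x| = m * 2**e, m in [0.5, 1)
    scale = 2.0 ** (man_bits + 1)              # keep man_bits+1 significand bits
    y = np.sign(x) * np.ldexp(np.round(m * scale) / scale, e)
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** bias   # largest normal value
    min_normal = 2.0 ** (1 - bias)
    y = np.clip(y, -max_val, max_val)
    return np.where(np.abs(y) < min_normal / 2, 0.0, y)

w = np.array([0.1234, -3.456, 17.89, 1e-5], dtype=np.float32)
print(quantize_minifloat(w))                   # values snap to the coarse grid
print(quantize_minifloat(w, exp_bits=5, man_bits=2))
```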

Development of Artificial Intelligence Processing Embedded System for Rescue Requester search (소방관의 요구조자 탐색을 위한 인공지능 처리 임베디드 시스템 개발)

  • La, Jong-Pil;Park, Hyun Ju
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.12
    • /
    • pp.1612-1617
    • /
    • 2020
  • Recently, research on reducing accident rates by actively adopting artificial intelligence in the field of disaster safety has been spreading. In particular, quickly locating rescue requesters is important for effective rescue activities at a disaster site, yet searching for them is difficult due to the nature of the disaster environment. In this paper, we develop an artificial intelligence system that can operate in a firefighter's smart helmet to search for rescue requesters. To this end, we selected an optimal SoC and developed it into an embedded system, and by running general-purpose AI software on it, we verified that the embedded system is suitable as an AI software platform for future smart-helmet research.

A study on data collection environment and analysis using virtual server hosting of Azure cloud platform (Azure 클라우드 플랫폼의 가상서버 호스팅을 이용한 데이터 수집환경 및 분석에 관한 연구)

  • Lee, Jaekyu;Cho, Inpyo;Lee, Sangyub
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.07a
    • /
    • pp.329-330
    • /
    • 2020
  • In this paper, we built a data collection environment using virtual server hosting on the Azure cloud platform and studied a data analysis method based on Azure's Automated Machine Learning (AutoML). We constructed the data collection environment by installing a LAMP stack (Linux, Apache, MySQL, PHP) on the hosted virtual server, and then fed the collected data to Azure AutoML to perform automated machine learning. Azure AutoML automates the time-consuming, repetitive process of machine-learning model development, saving time and resources when implementing a machine-learning solution. In particular, for classification, regression, and forecasting on the collected data, AutoML ranks the machine-learning models that best fit the data based on their training scores. This is highly efficient because the models needed for data analysis do not have to be coded from scratch, and the model configuration and system can be designed and evaluated before the full machine-learning system is developed and implemented. In this paper, we studied the data collection environment required for NPU (Neural Processing Unit) training and, based on Azure AutoML, studied the selection of the most efficient algorithms for data classification and regression.
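
As a minimal sketch of the AutoML step described above, assuming the v1 Python SDK (azureml-sdk) and placeholder names for the workspace dataset and label column, a classification run that returns a training-score-ranked leaderboard looks roughly like this:

```python
from azureml.core import Workspace, Dataset
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                  # auth via a local config.json
# "npu-train-data" is a placeholder for data collected through the LAMP stack
data = Dataset.get_by_name(ws, name="npu-train-data")

automl_config = AutoMLConfig(
    task="classification",        # or "regression" for the regression study
    training_data=data,
    label_column_name="label",    # placeholder target column
    primary_metric="accuracy",    # the score used to rank candidate models
    experiment_timeout_hours=1,
    n_cross_validations=5,
)

run = Experiment(ws, "npu-data-automl").submit(automl_config)
best_run, best_model = run.get_output()   # top-ranked model on the leaderboard
```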

Research Trends in Domestic and International AI chips (국내외 인공지능 반도체에 대한 연구 동향)

  • Hyun Ji Kim;Se Young Yoon;Hwa Jeong Seo
    • Smart Media Journal
    • /
    • v.13 no.3
    • /
    • pp.36-44
    • /
    • 2024
  • Recently, large-scale artificial intelligence (AI) models such as ChatGPT have been developed, and as AI is used across various industrial fields, attention is focused on AI chips (semiconductors). AI chips are chips designed for the computations of AI algorithms, and many companies and institutes at home and abroad, such as NVIDIA, Tesla, and ETRI, are developing them. In this paper, we survey research trends for nine types of AI chips. Currently, many attempts are being made to improve the computational performance of AI chips, and semiconductors for specific purposes are also being designed. To compare the various AI semiconductors, each chip is analyzed in terms of operation unit, speed, power, and energy efficiency. We also introduce existing optimization methodologies for AI computation. Based on this, we present future research directions for AI semiconductors.