• Title/Abstract/Keyword: Neural Network Architecture

Search results: 757 items (processing time: 0.028 s)

Automated optimization for memory-efficient high-performance deep neural network accelerators

  • Kim, HyunMi; Lyuh, Chun-Gi; Kwon, Youngsu
    • ETRI Journal, Vol. 42, No. 4, pp. 505-517, 2020
  • The increasing size and complexity of deep neural networks (DNNs) necessitate the development of efficient high-performance accelerators. An efficient memory structure and operating scheme, together with dataflow control, provide an intuitive basis for high-performance accelerators. Furthermore, processing various neural networks (NNs) requires a flexible memory architecture, a programmable control scheme, and automated optimization. We first propose an efficient yet flexible architecture that operates at a high frequency despite its large memory and PE-array sizes. We then improve the efficiency and usability of the architecture by automating the optimization algorithm. The experimental results show that the architecture increases data reuse; a diagonal write path improves performance by 1.44× on average across a wide range of NNs. The automated optimizations further enhance performance by 3.8× to 14.79× while improving usability. Therefore, automating the optimization, as well as designing an efficient architecture, is critical to realizing high-performance DNN accelerators.
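
As a rough illustration of what such automated optimization can look like, the sketch below exhaustively scores loop-tile sizes for a convolution layer against an on-chip buffer budget, maximizing a simple data-reuse proxy. The tile candidates, cost terms, and function names are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical tiling search: pick loop-tile sizes for a conv layer that fit
# an on-chip buffer and maximize MACs performed per byte fetched from DRAM.
from itertools import product

def best_tiling(H, W, C, K, buf_bytes, elem=2):
    best = None
    for th, tw, tc in product((7, 14, 28), (7, 14, 28), (16, 32, 64)):
        if th > H or tw > W or tc > C:
            continue
        # On-chip footprint: one input tile plus th*tw*K partial sums.
        footprint = (th * tw * tc + th * tw * K) * elem
        if footprint > buf_bytes:
            continue
        macs = th * tw * tc * K                      # work done per tile
        fetched = (th * tw * tc + tc * K) * elem     # input tile + weights
        reuse = macs / fetched                       # data-reuse proxy
        if best is None or reuse > best[0]:
            best = (reuse, (th, tw, tc))
    return best

print(best_tiling(H=56, W=56, C=256, K=256, buf_bytes=128 * 1024))
```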

Genetic Algorithm Based Deep Learning Neural Network Structure and Hyperparameter Optimization

  • 이상협; 강도영; 박장식
    • Journal of Korea Multimedia Society, Vol. 24, No. 4, pp. 519-527, 2021
  • Alzheimer's disease is one of the major challenges of the coming aging era, and attempts are being made to diagnose and predict it through various biomarkers. While deep learning-based technologies have recently expanded across the medical industry as powerful imaging tools, empirical design is not easy because the many deep learning network architectures and categorical hyperparameters depend on the problem and data to be solved. In this paper, we show the possibility of optimizing a deep learning neural network structure and its hyperparameters for the classification of Alzheimer's disease in amyloid brain images, using genetic algorithms on a representative deep learning architecture. It was observed that the optimal network structure and hyperparameters were chosen as the experimental values converged.
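
A minimal sketch of this kind of genetic search over a categorical hyperparameter space follows; the fitness function is a toy stand-in for what would be validation accuracy on the amyloid brain images, and the search space and GA settings are assumptions.

```python
# Toy genetic algorithm over categorical hyperparameters: selection of the
# fittest, uniform crossover, and single-gene mutation.
import random

SPACE = {"layers": [2, 3, 4, 5], "width": [32, 64, 128], "lr": [1e-2, 1e-3, 1e-4]}

def fitness(ind):  # placeholder: replace with validation accuracy
    return -abs(ind["layers"] - 4) - abs(ind["width"] - 64) / 64 - ind["lr"]

def mutate(ind):
    key = random.choice(list(SPACE))
    return {**ind, key: random.choice(SPACE[key])}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in SPACE}

pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                       # keep the fittest individuals
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(15)]    # breed and mutate the rest
print(pop[0])
```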

On the Clustering Networks Using Kohonen's Self-Organization Architecture

  • 이지영
    • 정보학연구, Vol. 8, No. 1, pp. 119-124, 2005
  • The learning procedure in a neural network updates the weights between neurons. An inadequate initial learning coefficient causes excessive iterations of the learning process or incorrect learning results, degrading learning efficiency. In this paper, an adaptive learning algorithm is proposed to increase the efficiency of the learning algorithm of Kohonen's Self-Organizing Neural Network. The algorithm updates the weights adaptively as the learning procedure runs. To prove its efficiency, the algorithm is applied to the clustering of random weights. The results show a learning rate improved by about 42~55%, with fewer iterations needed to reach the correct answer.
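
The following toy sketch shows a Kohonen-style winner-take-all update with a learning coefficient that adapts as training proceeds, in the spirit of the proposed algorithm; the specific decay rule is an assumption for illustration.

```python
# Kohonen-style clustering with an adaptive learning coefficient.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 2))          # random points to cluster
w = rng.random((4, 2))               # 4 cluster nodes

for t, x in enumerate(data):
    winner = np.argmin(np.linalg.norm(w - x, axis=1))
    lr = 0.5 / (1.0 + 0.01 * t)      # adaptive coefficient shrinks over time
    w[winner] += lr * (x - w[winner])
print(np.round(w, 3))
```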

A Self-Creating and Organizing Neural Network

  • 최두일; 박상희
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings: 1991 Korea Automatic Control Conference (Domestic Session); KOEX, Seoul; 22-24 Oct. 1991; pp. 768-772
  • The Self-Creating and Organizing (SCO) network is a new architecture and an unsupervised learning algorithm for artificial neural networks. SCO begins with only one output node, which has a sufficiently wide response range, and the response ranges of all nodes decrease with time. The Self-Creating and Organizing Neural Network (SCONN) automatically decides whether to adapt the weights of an existing node or to create a new node. It is compared with Kohonen's Self-Organizing Feature Map (SOFM). The results show that SCONN has many advantages over other competitive learning architectures.
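
A toy sketch of the adapt-or-create decision described above: start with one node whose response range shrinks over time, and create a new node whenever the best-matching node no longer covers the input. Thresholds and decay rates are illustrative assumptions.

```python
# Self-creating network: adapt the winner if it covers the input,
# otherwise spawn a new node; the response range decays over time.
import numpy as np

rng = np.random.default_rng(1)
nodes, radius = [rng.random(2)], 1.0

for t in range(200):
    x = rng.random(2)
    d = [np.linalg.norm(n - x) for n in nodes]
    best = int(np.argmin(d))
    if d[best] <= radius:
        nodes[best] += 0.2 * (x - nodes[best])   # adapt existing node
    else:
        nodes.append(x.copy())                   # self-create a new node
    radius = max(0.15, radius * 0.99)            # response range decreases
print(f"{len(nodes)} nodes created")
```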

A Study on Performance Improvement of Neural Networks Using Genetic Algorithms

  • 임정은; 김해진; 장병찬; 서보혁
    • The Korean Institute of Electrical Engineers (KIEE): Proceedings of the 37th Summer Conference 2006, Vol. D, pp. 2075-2076
  • In this paper, we propose a new architecture for genetic algorithm (GA)-based backpropagation (BP). Conventional BP does not guarantee that the network produced through learning has the optimal architecture. The proposed GA-based BP enables the network to take on a structurally more optimized form, and to be a much more flexible and preferable neural network than conventional BP. The experimental results on BP network optimization show that this algorithm can effectively keep the BP network from converging to a local optimum. Comparison shows that the improved genetic algorithm almost always avoids the trap of local optima and effectively improves convergence speed.
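
The hedged sketch below illustrates the GA-seeded idea: evolving the weight vector of a tiny 2-2-1 network on XOR so that backpropagation can start near a good basin rather than a local optimum. The weight encoding and GA settings are assumptions, not the authors' exact scheme.

```python
# GA over a flat weight vector of a 2-2-1 network on XOR; the best vector
# would then seed conventional BP fine-tuning.
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):                       # w packs W1(2x2), b1(2), W2(2), b2
    h = np.tanh(x @ w[:4].reshape(2, 2) + w[4:6])
    return 1 / (1 + np.exp(-(h @ w[6:8] + w[8])))

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

pop = rng.normal(0, 2, (40, 9))
for gen in range(200):
    pop = pop[np.argsort([loss(w) for w in pop])]
    children = pop[:10] + rng.normal(0, 0.3, (10, 9))   # mutate the elite
    pop = np.vstack([pop[:10], children, rng.normal(0, 2, (20, 9))])
best = pop[0]
print("GA loss:", loss(best))            # hand this vector to BP fine-tuning
```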

Deep Neural Network-based Jellyfish Distribution Recognition System Using a UAV

  • 구정모; 명현
    • The Journal of Korea Robotics Society, Vol. 12, No. 4, pp. 432-440, 2017
  • In this paper, we propose a jellyfish distribution recognition and monitoring system using a UAV (unmanned aerial vehicle). The UAV was designed to satisfy the requirements for flight in an ocean environment. The target jellyfish, Aurelia aurita, is recognized through a convolutional neural network, and its distribution is calculated. A modified deep neural network architecture was developed to provide reliable recognition accuracy and fast operation; owing to the lightweight network architecture, recognition is about 400 times faster than GoogLeNet. We also introduce the method for selecting candidates to be used as inputs to the proposed network. Recognition accuracy is improved by removing the probability of the meaningless class from the probability vector of the evaluated input image and re-normalizing the remainder. The jellyfish distribution is calculated from the recognized unit jellyfish images, and the distribution level is defined using the novel concept of a distribution map buffer.
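
The re-normalization step mentioned in this abstract can be sketched in a few lines: drop the meaningless class from the softmax output and re-normalize the remaining probabilities. The class indices and values below are assumptions for illustration.

```python
# Drop a "meaningless" class from a softmax output and re-normalize.
import numpy as np

probs = np.array([0.40, 0.35, 0.25])   # [background, jellyfish, other]
MEANINGLESS = 0                        # assumed index of the ignored class

kept = np.delete(probs, MEANINGLESS)
renormed = kept / kept.sum()           # re-evaluate by normalization
print(renormed)                        # jellyfish score rises to ~0.583
```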

Neural Network Control of Humanoid Robot

  • 김동원; 김낙현; 박귀태
    • Journal of Institute of Control, Robotics and Systems, Vol. 16, No. 10, pp. 963-968, 2010
  • This paper handles ZMP-based control, inspired by neural networks, for humanoid robot walking on surfaces of varying slope. Humanoid robots are currently one of the most exciting research topics in robotics, and maintaining stability while they stand, walk, or move is a key concern. To ensure a steady and smooth walking gait, a feedforward neural network trained by the backpropagation algorithm is employed. The inputs of the network are the ZMPx and ZMPy errors of the robot, and the outputs are its x and y positions. The network allows the controller to generate the desired balanced robot positions, resulting in a steady gait as the robot moves on a flat floor and descends a slope. In this paper, walking experiments are carried out in which actual position data from a prototype robot are measured in real time and fed into the neural network-inspired controller designed for stable bipedal walking.
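
A minimal sketch of the stated controller structure, assuming synthetic data: a feedforward network trained by backpropagation that maps (ZMPx, ZMPy) errors to x, y position corrections. The layer sizes and training signal are illustrative, not the paper's setup.

```python
# Small feedforward net trained by backpropagation on synthetic ZMP errors.
import numpy as np

rng = np.random.default_rng(3)
zmp_err = rng.uniform(-1, 1, (256, 2))        # stand-in for measured errors
target = 0.5 * zmp_err                        # stand-in balance correction

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 2)), np.zeros(2)

for epoch in range(500):
    h = np.tanh(zmp_err @ W1 + b1)            # hidden layer
    out = h @ W2 + b2                         # linear output: x, y positions
    err = out - target
    # Backpropagation of the mean-squared error.
    gW2 = h.T @ err / len(err); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = zmp_err.T @ dh / len(dh); gb1 = dh.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * g
print("final MSE:", float((err ** 2).mean()))
```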

A Rule Base Derivation Method Using Neural Networks for the Fuzzy Logic Control of Robot Manipulators

  • 이석원; 경계현; 김대원; 이범희; 고명삼
    • ICROS Conference Proceedings: 1992 Korea Automatic Control Conference (Domestic Session); KOEX, Seoul; 19-21 Oct. 1992; pp. 441-446
  • We propose a control architecture for the fuzzy logic control of robot manipulators, together with a rule base derivation method for a fuzzy logic controller (FLC) using a neural network. The control architecture is composed of the FLC and a PD (proportional-derivative) controller, and the neural network is designed to match the FLC's structure. After training is finished by backpropagation (BP) and feedback error learning (FEL), the rule base is derived from the neural network and reduced in two stages: smoothing and logical reduction. We also evaluate the control architecture through simulation to verify the effectiveness of the proposed method.
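
As a hedged illustration of deriving a rule base from a trained network, the sketch below samples a stand-in network over a grid of (e, edot) inputs and quantizes inputs and outputs into linguistic labels, yielding one rule per cell; the paper's smoothing and logical-reduction stages are not reproduced.

```python
# Derive fuzzy IF-THEN rules by sampling a (stand-in) trained network over
# a grid of inputs and quantizing everything into linguistic labels.
import numpy as np

LABELS = ["NB", "NS", "ZE", "PS", "PB"]        # negative big ... positive big

def net(e, edot):                              # stand-in for the trained NN
    return np.clip(-0.7 * e - 0.3 * edot, -1, 1)

def to_label(v):                               # quantize [-1, 1] into 5 bins
    return LABELS[min(4, int((v + 1) / 0.4))]

rules = {}
for e in np.linspace(-1, 1, 5):
    for edot in np.linspace(-1, 1, 5):
        rules[(to_label(e), to_label(edot))] = to_label(net(e, edot))

for (le, led), lu in sorted(rules.items()):
    print(f"IF e is {le} AND edot is {led} THEN u is {lu}")
```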

A Study of the Development of a Simulator for Deformation of the Steel Plate in Line Heating

  • 서도원; 양박달치
    • Korean Society of Ocean Engineers (KSOE): 2006 20th-Anniversary Regular Conference and International Workshop, pp. 213-216
  • During the last decade, several methods have been proposed for estimating thermal deformations in the line heating process. These are mainly based on assumed residual strains in the heat-affected zone or on simulated relations between heating conditions and residual deformations. However, such results have had limited applicability owing to oversimplified heating conditions or a shortage of data. The purpose of this paper is to develop a simulator of thermal deformation in line heating using artificial neural networks. Two neural networks, predicting the maximum temperature and the deformations at the heating line, are studied. Deformation data from line heating experiments are used as training data for the networks. The thermal deformations predicted by the neural networks were observed to correlate well with the experimental results.
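
The two-model pipeline described above can be sketched as follows, with linear least-squares standing in for the paper's neural networks and synthetic data replacing the line heating experiments; all variables and coefficients are assumptions.

```python
# Two chained regressors: conditions -> max temperature -> deformation.
import numpy as np

rng = np.random.default_rng(4)
# Heating conditions: [heat input (kJ/mm), speed (mm/s), plate thickness (mm)]
cond = rng.uniform([1, 2, 6], [5, 10, 20], (100, 3))
t_max = 300 + 80 * cond[:, 0] - 10 * cond[:, 1] + rng.normal(0, 5, 100)
deform = 0.002 * t_max - 0.01 * cond[:, 2] + rng.normal(0, 0.02, 100)

A1 = np.c_[cond, np.ones(100)]
w1, *_ = np.linalg.lstsq(A1, t_max, rcond=None)      # model 1: max temperature

A2 = np.c_[cond, A1 @ w1, np.ones(100)]
w2, *_ = np.linalg.lstsq(A2, deform, rcond=None)     # model 2: deformation

pred = A2 @ w2
print("deformation RMSE:", float(np.sqrt(((pred - deform) ** 2).mean())))
```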

Trends in Lightweight Neural Network Algorithms and Hardware Acceleration Technologies for Transformer-based Deep Neural Networks

  • 김혜지; 여준기
    • Electronics and Telecommunications Trends, Vol. 38, No. 5, pp. 12-22, 2023
  • The development of neural networks is evolving towards the adoption of transformer structures with attention modules. Hence, research on lightweight neural network algorithms and hardware acceleration is being actively extended from conventional convolutional neural networks to transformer-based networks. We present a survey of state-of-the-art research on lightweight neural network algorithms and hardware architectures that reduce memory usage and accelerate both inference and training. To describe the corresponding trends, we review recent studies on token pruning, quantization, and architecture tuning for the vision transformer. In addition, we present a hardware architecture that incorporates lightweight algorithms into artificial intelligence processors to accelerate processing.
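
As one concrete example of the surveyed techniques, the sketch below ranks vision-transformer tokens by the attention they receive from the [CLS] token and keeps only the top half for later layers; the shapes and keep ratio are illustrative assumptions, not any specific method from the survey.

```python
# Token pruning sketch: keep the patch tokens most attended by [CLS].
import numpy as np

rng = np.random.default_rng(5)
tokens = rng.normal(size=(197, 64))        # [CLS] + 196 patch tokens

q = tokens[0] @ rng.normal(size=(64, 64))  # query from the [CLS] token
k = tokens @ rng.normal(size=(64, 64))     # keys from all tokens
s = (q @ k.T) / 8                          # scaled dot-product scores
attn = np.exp(s - s.max()); attn /= attn.sum()

keep = 1 + np.argsort(attn[1:])[-98:]      # top 50% patch tokens by attention
pruned = np.vstack([tokens[:1], tokens[np.sort(keep)]])
print(pruned.shape)                        # (99, 64): [CLS] + kept patches
```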