• Title/Summary/Keyword: Neural Network Architecture

A Design of Neural Network Control Architecture for Robot Motion (로보트 운동을 위한 신경회로망 제어구조의 설계)

  • 이윤섭;구영모;조시형;우광방
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.4 / pp.400-410 / 1992
  • This paper deals with the design of neural network control architectures for robot motion. Three types of control architecture are designed: 1) a neural network control architecture with the same characteristics as the computed torque method; 2) a neural network control architecture that compensates for the control error of the computed torque method under a fixed feedback gain; and 3) a neural network adaptive control architecture. Computer simulations of a six-link PUMA manipulator are conducted to examine the proposed architectures; a sketch of the underlying computed-torque law follows this entry.

  • PDF
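
For orientation, here is a minimal numpy sketch of the computed-torque law that the first architecture mirrors: tau = M(q)(qdd_d + Kd*edot + Kp*e) + C(q,qd)*qd + G(q). The two-link dynamics terms, gains, and set-point below are illustrative stand-ins, not the paper's six-link PUMA model.

```python
import numpy as np

# Toy stand-ins for the manipulator dynamics terms; a real PUMA model
# would supply these (illustrative functions, not the paper's model).
def M(q):       # inertia matrix
    return np.diag([2.0 + np.cos(q[1]), 1.0])

def C(q, qd):   # Coriolis/centrifugal matrix
    s = np.sin(q[1])
    return np.array([[-0.5 * s * qd[1], -0.5 * s * (qd[0] + qd[1])],
                     [ 0.5 * s * qd[0], 0.0]])

def G(q):       # gravity vector
    return np.array([9.8 * np.cos(q[0]), 4.9 * np.cos(q[0] + q[1])])

Kp = np.diag([100.0, 100.0])   # proportional gains (assumed values)
Kd = np.diag([20.0, 20.0])     # derivative gains (assumed values)

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    """tau = M(q)(qdd_des + Kd@e_dot + Kp@e) + C(q, qd)@qd + G(q)."""
    e, e_dot = q_des - q, qd_des - qd
    return M(q) @ (qdd_des + Kd @ e_dot + Kp @ e) + C(q, qd) @ qd + G(q)

tau = computed_torque(np.zeros(2), np.zeros(2),
                      np.array([0.5, -0.3]), np.zeros(2), np.zeros(2))
print(tau)
```

The paper's first architecture replaces the model terms with a trained network; the second keeps this law and adds a network that compensates its error.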

A hardware implementation of neural network with modified HANNIBAL architecture (수정된 하니발 구조를 이용한 신경회로망의 하드웨어 구현)

  • 이범엽;정덕진
    • The Transactions of the Korean Institute of Electrical Engineers / v.45 no.3 / pp.444-450 / 1996
  • A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified version of the hardware architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Backpropagation Algorithm Learning). To implement efficient neural network hardware, we analyzed various types of multiplier, the major functional block of a neuro-processor cell. Based on this analysis, we designed an efficient digital neural network hardware using a serial/parallel multiplier and tested its operation; a software sketch of this multiplier follows the entry. We also analyzed the hardware efficiency through logic-level simulation.

  • PDF
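
As a rough illustration of the multiplier type the paper selects, the sketch below emulates a serial/parallel (shift-and-add) multiplier in software: one operand is held in parallel while the bits of the other are streamed serially, one per clock. The bit width and test operands are arbitrary assumptions; the paper's circuit is not reproduced.

```python
def serial_parallel_multiply(a: int, b: int, width: int = 8) -> int:
    """Shift-and-add multiply: 'a' is available in parallel, the bits of
    'b' arrive serially LSB-first, one per simulated clock cycle."""
    acc = 0
    for i in range(width):      # one serial bit per cycle
        if (b >> i) & 1:        # next serial bit of b
            acc += a << i       # add the parallel operand, shifted
    return acc

assert serial_parallel_multiply(13, 11) == 143
```

In hardware this trades one addition per cycle against the area of a fully parallel array multiplier, roughly the area/speed trade-off such a multiplier analysis weighs.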

Robust architecture search using network adaptation

  • Rana, Amrita;Kim, Kyung Ki
    • Journal of Sensor Science and Technology / v.30 no.5 / pp.290-294 / 2021
  • Experts have designed popular and successful model architectures, but these are not optimal for every scenario. Despite the remarkable performance achieved by deep neural networks, manually designed networks for classification tasks remain the backbone of object detection. One major challenge is the ImageNet pre-training of the search-space representation; moreover, the searched network incurs a huge computational cost. To avoid the pre-training process, we introduce a network adaptation technique that starts from a backbone model pre-trained on ImageNet. The adaptation method efficiently adapts a manually designed ImageNet network to a new object-detection task. Neural architecture search (NAS) is adopted to adapt the architecture of the network; the adaptation is conducted on the MobileNetV2 network, and the proposed NAS is tested with the SSDLite detector. The results demonstrate improvements over the existing network architecture in terms of search cost, total number of multiply-adds (Madds), and mean average precision (mAP). The total computational cost of the proposed NAS is much lower than that of state-of-the-art (SOTA) NAS methods.
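
A minimal sketch of the adaptation idea under stated assumptions: a random search over per-layer choices (kernel size and expansion ratio, as in MobileNetV2-style blocks) scored by a hypothetical proxy. The paper's actual NAS strategy, search space, and detection-mAP objective are not reproduced here.

```python
import random

# Illustrative per-layer search space for a MobileNetV2-like backbone.
SPACE = {"kernel": [3, 5, 7], "expand": [3, 6]}
NUM_LAYERS = 17   # assumed depth, for illustration only

def sample_arch():
    return [(random.choice(SPACE["kernel"]), random.choice(SPACE["expand"]))
            for _ in range(NUM_LAYERS)]

def proxy_score(arch):
    # Hypothetical stand-in for detection accuracy minus compute (Madds):
    # favour configurations near a target cost budget.
    cost = sum(k * k * e for k, e in arch)
    return -abs(cost - 900)

best = max((sample_arch() for _ in range(200)), key=proxy_score)
print(best[:3], proxy_score(best))
```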

Architectures of the Parallel, Self-Organizing Hierarchical Neural Networks (병렬 자구성 계층 신경망 (PSHINN)의 구조)

  • 윤영우;문태현;홍대식;강창언
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.1 / pp.88-98 / 1994
  • A new neural network architecture called the Parallel, Self-Organizing Hierarchical Neural Network (PSHNN) is presented. The architecture consists of a number of stages, each of which can be a particular stage neural network (SNN). Experiments comparing it with a multilayer network trained by backpropagation indicate the superiority of the new architecture in terms of classification accuracy, training time, and parallelism; a toy stage-cascade sketch follows this entry.

  • PDF
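
A minimal sketch of the stage-cascade idea, with linear least-squares models standing in for the stage neural networks (SNNs) and a confidence threshold deciding which samples are passed, after a nonlinear transform, to the next stage. The threshold and transform are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # toy two-class data
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def train_stage(X, y):
    """Least-squares linear model standing in for one stage network."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
    return w

def stage_predict(w, X):
    s = np.hstack([X, np.ones((len(X), 1))]) @ w
    return np.sign(s), np.abs(s)                 # label and confidence

stages, Xc, yc = [], X, y
for _ in range(3):                               # a few stages in the hierarchy
    w = train_stage(Xc, yc)
    stages.append(w)
    _, conf = stage_predict(w, Xc)
    reject = conf < 0.3                          # low-confidence samples move on
    if not reject.any():
        break
    Xc, yc = np.tanh(2.0 * Xc[reject]), yc[reject]   # nonlinear re-mapping
print(f"{len(stages)} stages trained")
```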

Path-Based Computation Encoder for Neural Architecture Search

  • Yang, Ying;Zhang, Xu;Pan, Hu
    • Journal of Information Processing Systems / v.18 no.2 / pp.188-196 / 2022
  • Recently, neural architecture search (NAS) has received increasing attention, as it can replace human experts in designing neural network architectures for different tasks and has achieved remarkable results in many challenging settings. In this study, a path-based computation neural architecture encoder (PCE) is proposed. PCE first encodes the computation performed on each path through a neural network, then aggregates the encodings of all paths with an attention mechanism. This simulates how information is computed along paths in a neural network and encodes the computation of the network rather than the structure of its graph, which is more consistent with the computational properties of neural networks. We performed an extensive comparison with eight encoding methods on two commonly used NAS search spaces (NAS-Bench-101 and NAS-Bench-201), covering both the predictive capability of performance predictors and the search capability of two search strategies (reinforcement-learning-based and Bayesian-optimization-based) when equipped with different encoders. The evaluation shows that PCE is an efficient encoding method that effectively ranks and predicts neural architecture performance, thereby improving the search efficiency of neural architectures.
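
A minimal sketch of the path-based idea under simplifying assumptions: every input-to-output path of a toy cell DAG is enumerated, the op sequence on each path is folded into a vector, and softmax attention over the path encodings yields the architecture encoding. The embeddings and attention query are random stand-ins for PCE's learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy cell as a DAG: node -> list of (successor, operation).
OPS = {"conv3x3": 0, "conv1x1": 1, "maxpool": 2}
cell = {0: [(1, "conv3x3"), (2, "conv1x1")],
        1: [(3, "maxpool")],
        2: [(3, "conv3x3")],
        3: []}                                  # node 3 is the output

def paths(node=0, prefix=()):
    """Enumerate the op sequence along every input-to-output path."""
    if not cell[node]:
        yield prefix
    for nxt, op in cell[node]:
        yield from paths(nxt, prefix + (op,))

D = 8
op_emb = rng.normal(size=(len(OPS), D))         # stand-in op embeddings

def encode_path(p):
    h = np.zeros(D)                             # fold ops sequentially
    for op in p:
        h = np.tanh(h + op_emb[OPS[op]])
    return h

H = np.stack([encode_path(p) for p in paths()]) # one vector per path
q = rng.normal(size=D)                          # stand-in attention query
a = np.exp(H @ q); a /= a.sum()                 # attention over paths
arch_encoding = a @ H                           # aggregated encoding
print(arch_encoding)
```

The resulting vector would feed a performance predictor or a search strategy, as in the paper's NAS-Bench experiments.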

Nonlinear System Modeling Based on Multi-Backpropagation Neural Network (다중 역전파 신경회로망을 이용한 비선형 시스템의 모델링)

  • Baeg, Jae-Huyk;Lee, Jung-Moon
    • Journal of Industrial Technology / v.16 / pp.197-205 / 1996
  • In this paper, we propose a new neural architecture synthesized from a combination of two known structures: the Multi-resolution Radial-basis Competitive and Cooperative Network (MRCCN) and the Backpropagation Network (BPN). The proposed network improves on the learning speed of MRCCN and the mapping capability of BPN. Its ability and effectiveness in identifying a nonlinear dynamic system are demonstrated by computer simulation; a toy sketch of the combination follows this entry.

  • PDF
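
A minimal sketch of the combination idea, assuming a fixed grid of radial-basis centres as a stand-in for MRCCN's clustering and a gradient-descent (backpropagation-style) output layer for the BPN part; the paper's actual architecture and training procedure are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear system to identify: y = sin(x) + noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

centres = np.linspace(-3, 3, 15)[:, None]       # assumed RBF centres

def rbf(X, width=0.5):
    """Radial-basis layer: one Gaussian response per centre."""
    return np.exp(-((X - centres.T) ** 2) / (2 * width ** 2))

Phi = rbf(X)                                    # hidden activations (fixed)
W = np.zeros(centres.shape[0])
for _ in range(500):                            # gradient descent on MSE
    err = Phi @ W - y
    W -= 0.05 * Phi.T @ err / len(X)

print("train MSE:", np.mean((Phi @ W - y) ** 2))
```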

Bayesian Neural Network with Recurrent Architecture for Time Series Prediction

  • Hong, Chan-Young;Park, Jung-Hun;Yoon, Tae-Sung;Park, Jin-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.631-634 / 2004
  • In this paper, the Bayesian recurrent neural network (BRNN) is proposed for predicting time series data. Among traditional prediction methodologies, neural network methods are considered more effective for non-linear and non-stationary time series data. A neural network predictor requires a proper learning strategy to adjust the network weights, and one needs to allow for non-linear and non-stationary evolution of those weights. The Bayesian neural network in this paper estimates not a single set of weights but the probability distribution of the weights: we take the weight vector as the state vector of a state-space model and estimate its distribution by Bayesian inference, which yields a more exact estimate of the weights (a toy Kalman-filter sketch of this view follows the entry). Moreover, in terms of network architecture, a recurrent feedback structure is known to be superior to a feedforward structure for time series prediction, so the recurrent network with Bayesian inference, which we call BRNN, is expected to outperform a normal neural network. To verify the performance of the proposed method, time series data are numerically generated and the predictor is applied to them. As a result, BRNN is shown to give better prediction results than a common feedforward Bayesian neural network.

  • PDF
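
A minimal sketch of the weights-as-state view, using a linear predictor and a Kalman filter (the linear-Gaussian special case of Bayesian inference) in place of the paper's recurrent network; the AR data, noise levels, and random-walk weight dynamics are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-stationary series: AR(2) with slowly drifting coefficients.
T, w_true = 300, np.array([1.2, -0.5])
x = np.zeros(T)
for t in range(2, T):
    w_true = w_true + 0.002 * rng.normal(size=2)
    x[t] = w_true @ x[t-2:t][::-1] + 0.05 * rng.normal()

# Treat the predictor weights as the state vector and track their
# posterior mean/covariance recursively.
w, P = np.zeros(2), np.eye(2)                  # weight mean and covariance
Q, R = 1e-4 * np.eye(2), 0.05 ** 2             # assumed noise levels
for t in range(2, T):
    h = x[t-2:t][::-1]                         # regressor: last two samples
    P = P + Q                                  # predict: random-walk weights
    K = P @ h / (h @ P @ h + R)                # Kalman gain
    w = w + K * (x[t] - h @ w)                 # measurement update
    P = P - np.outer(K, h) @ P
print("final weight estimate:", w)
```

The closed-form update here is only the linear special case; a recurrent nonlinear network like BRNN requires an approximate version of the same Bayesian recursion.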

Combination Tandem Architecture with Segmental Features for Robust Speech Recognition (강인한 음성 인식을 위한 탠덤 구조와 분절 특징의 결합)

  • Yun, Young-Sun;Lee, Yun-Keun
    • MALSORI / no.62 / pp.113-131 / 2007
  • Previous studies report that segmental-feature-based recognition systems give better results than conventional feature-based systems. In parallel, various studies have combined neural networks and hidden Markov models within a single system, expecting to combine the advantages of both. Influenced by these studies, the tandem approach was proposed, in which a neural network serves as the classifier and hidden Markov models serve as the decoder. In this paper, we apply the trend information of segmental features to the tandem architecture and use the posterior probabilities output by the neural network as inputs to the recognition system; a toy sketch of this tandem pipeline follows the entry. Experiments are performed on the Aurora database to examine the potential of the trend-feature-based tandem architecture. The results show that the proposed system excels in very low SNR environments. We therefore argue that trend information in the tandem architecture can be used in addition to traditional MFCC features.

  • PDF
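
A minimal sketch of the tandem feature pipeline, with random stand-ins for the trained network and the input frames: network outputs become log posteriors, are decorrelated with PCA, and are handed to the decoder as features (often alongside MFCCs). Dimensions are illustrative; the paper's trend features are not modelled here.

```python
import numpy as np

rng = np.random.default_rng(0)

frames = rng.normal(size=(500, 39))              # stand-in acoustic frames
W1 = rng.normal(size=(39, 64)) * 0.1             # stand-in trained MLP
W2 = rng.normal(size=(64, 30)) * 0.1

def posteriors(X):
    h = np.tanh(X @ W1)                          # hidden layer
    z = h @ W2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # per-frame phone posteriors

logp = np.log(posteriors(frames) + 1e-8)         # log posteriors
logp -= logp.mean(axis=0)
_, _, Vt = np.linalg.svd(logp, full_matrices=False)
tandem_feats = logp @ Vt[:24].T                  # PCA to 24 decorrelated dims
print(tandem_feats.shape)                        # (500, 24) -> HMM decoder
```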

Graph Convolutional - Network Architecture Search : Network architecture search Using Graph Convolution Neural Networks (그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi;Jong-Youel Park
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.649-654 / 2023
  • This paper proposes the design of a neural architecture search model using graph convolutional neural networks. Because deep learning models learn as black boxes, it is difficult to verify whether a designed model has a structure with optimized performance. A neural architecture search model consists of a recurrent neural network that creates a model and a convolutional neural network, which is the generated network. Conventional neural architecture search models use recurrent neural networks; in this paper we propose GC-NAS, which instead uses graph convolutional neural networks to create the convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth and, in parallel, a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) based on the depth information. Because depth information is reflected, the search area is wider, and because the search runs in parallel with depth information, the purpose of each search area is clear, so GC-NAS is judged to be theoretically superior in structure to existing models. Through its graph convolutional neural network block and graph-generation algorithm, GC-NAS is expected to overcome the high-dimensional time axis and the limited spatial search range of the recurrent neural networks in existing neural architecture search models. We also hope that the GC-NAS proposed in this paper will spur active research on applying graph convolutional neural networks to neural architecture search.
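
A minimal sketch of applying a graph convolution to an architecture graph, with a hypothetical four-layer candidate and random weights; the two readout heads loosely mirror the Layer Extraction and Hyper Parameter Prediction blocks but are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# A candidate architecture as a graph: edges between layers, plus
# one-hot layer-type node features (hypothetical encoding).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
Xf = np.eye(4)

# One graph-convolution layer: H = relu(D^-1/2 (A+I) D^-1/2 X W).
A_hat = A + A.T + np.eye(4)                   # symmetrise, add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))
W1 = rng.normal(size=(4, 8))
H = np.maximum(A_norm @ Xf @ W1, 0.0)

# Stand-in readout heads for depth and hyperparameter exploration.
w_depth = rng.normal(size=8)
W_hyper = rng.normal(size=(8, 3))
print("depth signal:", H.mean(axis=0) @ w_depth)
print("hyperparameter prediction:", H.mean(axis=0) @ W_hyper)
```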

Tuning the Architecture of Neural Networks for Multi-Class Classification (다집단 분류 인공신경망 모형의 아키텍쳐 튜닝)

  • Jeong, Chulwoo;Min, Jae H.
    • Journal of the Korean Operations Research and Management Science Society / v.38 no.1 / pp.139-152 / 2013
  • The purpose of this study is to establish the validity of tuning the architecture of neural network models for multi-class classification. A neural network model for multi-class classification is basically constructed by building a series of neural network models for binary classification. When building such a model, we must set parameter values, such as the number of hidden nodes and the weight-decay parameter, in advance; these deserve special attention because the performance of the model can differ substantially with their values. For better performance, the parameters must be tuned every time the neural network model is built. Nonetheless, previous studies have neither mentioned the necessity of this tuning process nor proved its validity. In this study, we argue that the parameters should be tuned every time a neural network model for multi-class classification is built. Through empirical analysis using wine data, we show that the performance of the model with tuned parameters is superior to that of untuned models.
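
A minimal sketch of the tuning step, using scikit-learn's built-in wine data: a cross-validated grid search over the number of hidden nodes and the weight-decay (L2) strength. Unlike the paper, MLPClassifier handles the multi-class case directly rather than through a series of binary models, and the grid values are illustrative.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)             # 3-class wine data

pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(max_iter=2000, random_state=0))
grid = {
    "mlpclassifier__hidden_layer_sizes": [(5,), (10,), (20,)],  # hidden nodes
    "mlpclassifier__alpha": [1e-4, 1e-3, 1e-2, 1e-1],           # weight decay
}
search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Rerunning this search whenever the model is rebuilt is exactly the per-build tuning the study argues for.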