• Title/Summary/Keyword: Neural networks learning

1,869 search results

Artificial Neural Network: Understanding the Basic Concepts without Mathematics

  • Han, Su-Hyun;Kim, Ko Woon;Kim, SangYun;Youn, Young Chul
    • Dementia and Neurocognitive Disorders / v.17 no.3 / pp.83-89 / 2018
  • Machine learning refers to a machine (i.e., a computer) determining for itself how input data are processed and predicting outcomes when provided with new data. An artificial neural network is a machine learning algorithm based on the concept of a human neuron. The purpose of this review is to explain the fundamental concepts of artificial neural networks.
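
The review's core building block, a single artificial neuron computing a weighted sum of inputs followed by an activation function, can be sketched as follows; the weights, bias, and sigmoid choice here are illustrative values, not from the paper:

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum of the inputs
    passed through a sigmoid activation."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Example: two inputs with hand-picked (illustrative) weights
x = np.array([1.0, 0.5])
w = np.array([0.4, -0.2])
b = 0.1
y = neuron(x, w, b)   # output lies in (0, 1)
```

Stacking layers of such neurons, with the weights adjusted from data, gives the networks the review describes.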

Using Classification function to integrate Discriminant Analysis, Logistic Regression and Backpropagation Neural Networks for Interest Rates Forecasting

  • Oh, Kyong-Joo;Ingoo Han
    • Proceedings of the Korea Intelligent Information Systems Society Conference / 2000.11a / pp.417-426 / 2000
  • This study suggests integrated neural network models for interest rate forecasting using change-point detection, classifiers, and classification functions based on structural change. The proposed model is composed of three phases with two-staged learning. The first phase is to detect successive and appropriate structural changes in the interest rate dataset. The second phase is to forecast the change-point group with classifiers (discriminant analysis, logistic regression, and backpropagation neural networks) and their combined classification functions. The final phase is to forecast the interest rate with backpropagation neural networks. We propose some classification functions to overcome the problem of two-staged learning, which cannot measure the performance of the first learning stage. Subsequently, we compare the structured models with a neural network model alone and, in addition, determine which of the classifiers and classification functions performs better. This article then examines the predictability of the proposed classification functions for interest rate forecasting using structural change.

  • PDF
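
One simple way to combine the three classifiers' change-point group predictions, in the spirit of the abstract's combined classification functions, is a majority vote; the group labels below are hypothetical, not the paper's data:

```python
import numpy as np

def majority_vote(predictions):
    """Combine several classifiers' group predictions by majority
    vote -- one simple classification function among the several
    kinds the paper compares."""
    preds = np.asarray(predictions)
    return np.array([np.bincount(col).argmax() for col in preds.T])

# Hypothetical change-point group labels from discriminant analysis,
# logistic regression, and a backpropagation network on four samples:
da  = [0, 1, 1, 2]
lr  = [0, 1, 2, 2]
bpn = [1, 1, 1, 2]
combined = majority_vote([da, lr, bpn])
```

The combined label then selects which downstream forecasting network handles each sample.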

Center estimation of the n-fold engineering parts using self organizing neural networks with generating and merge learning (뉴런의 생성 및 병합 학습 기능을 갖는 자기 조직화 신경망을 이용한 n-각형 공업용 부품의 중심추정)

  • 성효경;최흥문
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.11 / pp.95-103 / 1997
  • A robust center estimation technique for n-fold engineering parts is presented, which uses self-organizing neural networks with generating and merging learning for training neural units. To estimate the center of an n-fold engineering part using neural networks, the segmented boundaries of the part of interest are approximated to straight lines, and the temporary centers estimated by the cosine theorem, formed between each approximated straight line and the reference point, are indexed as (σ, θ) parametric vectors. The entries of the parametric vectors are then fed into the self-organizing neural network. Finally, the center of the n-fold part is extracted by means of the generating and merging learning of the neurons. To accelerate the learning process, the neural network applies an adaptive learning rate function to the merging process and a self-adjusting activation function to the generating process. Simulation results show that the centers of n-fold engineering parts are effectively estimated by the proposed technique, even without knowing the error distribution of the estimated centers and with little boundary information.

  • PDF
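
As a much-simplified illustration of the center estimation task (not the paper's (σ, θ) self-organizing method), the centroid of boundary samples already recovers the center for a symmetric, noise-free part; the sampled shape and center below are made up:

```python
import numpy as np

def estimate_center(boundary_pts):
    """Centroid of boundary sample points as a first-cut center
    estimate for a symmetric part; the paper refines such temporary
    estimates with a self-organizing network rather than a plain mean."""
    return boundary_pts.mean(axis=0)

# Noise-free samples evenly spaced on a boundary centered at (2, 3)
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pts = np.stack([2 + np.cos(t), 3 + np.sin(t)], axis=1)
center = estimate_center(pts)
```

The paper's contribution is precisely that this naive estimate breaks down under occlusion and noise, which the generating/merging neurons handle.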

Self-organized Distributed Networks for Precise Modelling of a System (시스템의 정밀 모델링을 위한 자율분산 신경망)

  • Kim, Hyong-Suk;Choi, Jong-Soo;Kim, Sung-Joong
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.11 / pp.151-162 / 1994
  • A new neural network structure called Self-organized Distributed Networks (SODN) is proposed for developing neural network-based multidimensional system models. Learning with the proposed networks is fast and precise; these properties result from the local learning mechanism. The structure of the networks is a combination of dual networks: self-organized networks and multilayered local networks. Each local network learns only the data in its sub-region. The large memory requirements and low generalization capability for untrained regions, which are drawbacks of conventional local network learning, are overcome in the proposed networks. Simulation results show that the proposed networks perform better than standard multilayer neural networks and Radial Basis Function (RBF) networks.

  • PDF
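
The SODN idea of partitioning the input space and training one local model per sub-region can be sketched as follows, with linear models standing in for the multilayered local networks and fixed centers standing in for the self-organized partition:

```python
import numpy as np

def fit_local_models(x, y, centers):
    """Assign each training point to its nearest center (the
    partition a self-organized network would learn) and fit one
    local linear model per sub-region."""
    assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    models = []
    for c in range(len(centers)):
        xs, ys = x[assign == c], y[assign == c]
        A = np.vstack([xs, np.ones_like(xs)]).T   # [slope, intercept] design
        coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
        models.append(coef)
    return assign, models

x = np.linspace(0, 2, 20)
y = np.where(x < 1, x, 2 - x)      # piecewise-linear target
assign, models = fit_local_models(x, y, centers=np.array([0.5, 1.5]))
```

Each local model fits its own piece exactly, which is the source of the fast, precise local learning the abstract claims.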

Design of PID Type Servo Controller Using Neural Networks and Its Implementation (신경회로망을 이용한 이득 자동조정 서보제어기 설계 및 구현)

  • 이상욱;김한실
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2000.10a / pp.229-229 / 2000
  • Conventional gain-tuning methods, such as the Ziegler-Nichols method, have the disadvantage that the optimal controller gain must be tuned manually. In this paper, modified PID controllers with self-tuning characteristics are proposed. The proposed controllers automatically tune the PID gains on-line using neural networks. A new learning scheme is proposed to improve the learning speed of the neural networks and to satisfy the real-time condition. Using the nonlinear mapping capability of neural networks, we derive a tuning method for the PID controller based on the backpropagation (BP) method of multilayered neural networks. Simulated and experimental results show that the proposed method gives appropriate PID controller parameters when implemented on a DC motor.

  • PDF
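
A minimal sketch of on-line PID gain tuning by gradient descent on the squared error; the unit plant sensitivity, the first-order plant, and all numbers are assumptions, and in the paper's method a backpropagation network supplies the gradient rather than this hand-coded rule:

```python
class SelfTuningPID:
    """Discrete PID controller whose gains are nudged on-line by a
    gradient step on 0.5*err^2 -- a crude stand-in for neural-network
    gain tuning.  Unit plant sensitivity (dy/du = 1) is assumed."""
    def __init__(self, kp=0.5, ki=0.1, kd=0.05, lr=1e-3):
        self.kp, self.ki, self.kd, self.lr = kp, ki, kd, lr
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt=0.01):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Gradient of the squared error w.r.t. each gain (dy/du = 1):
        self.kp += self.lr * err * err
        self.ki += self.lr * err * self.integral
        self.kd += self.lr * err * deriv
        self.prev_err = err
        return u

# Drive a hypothetical first-order plant (dy/dt = u - y) to setpoint 1
pid, y = SelfTuningPID(), 0.0
for _ in range(500):
    u = pid.step(1.0 - y)
    y += 0.01 * (u - y)
final_error = abs(1.0 - y)
```

The proportional gain only grows while error persists, mimicking the automatic gain adjustment the abstract describes.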

Hybrid Neural Networks for Pattern Recognition

  • Kim, Kwang-Baek
    • Journal of information and communication convergence engineering / v.9 no.6 / pp.637-640 / 2011
  • Hybrid neural networks have characteristics such as fast learning times, generality, and simplicity, and are mainly used to classify learning data and to model non-linear systems. The middle layer of a hybrid neural network clusters the learning vectors by grouping homogeneous vectors in the same cluster. In the clustering procedure, the homogeneity between learning vectors is represented as the distance between the vectors. Therefore, if the distances between a learning vector and all vectors in a cluster are smaller than a given constant radius, the learning vector is added to the cluster. However, the use of a constant radius in clustering is the primary source of errors and therefore decreases the recognition success rate. To improve the recognition success rate, we propose an enhanced hybrid network that organizes the middle layer effectively by using an enhanced ART1 network that adjusts the vigilance parameter dynamically according to the similarity between patterns. The results of experiments on a large number of calling card images show that the proposed algorithm greatly improves character extraction and recognition compared with conventional recognition algorithms.
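
The middle-layer clustering the abstract describes can be sketched as greedy ART1-style matching on binary vectors; scaling the vigilance by pattern density below is an illustrative stand-in for the paper's dynamic vigilance adjustment, not its actual rule:

```python
import numpy as np

def cluster(vectors, base_vigilance=0.7):
    """Greedy ART1-style clustering of binary vectors: a vector joins
    the first cluster whose prototype it matches above a vigilance
    threshold; otherwise it founds a new cluster."""
    prototypes, labels = [], []
    for v in vectors:
        # Dynamic vigilance (illustrative): denser patterns demand
        # a stricter match than sparse ones.
        rho = base_vigilance * (0.5 + 0.5 * v.mean())
        for i, p in enumerate(prototypes):
            match = np.logical_and(v, p).sum() / max(v.sum(), 1)
            if match >= rho:
                # Fast learning: prototype keeps only shared features
                prototypes[i] = np.logical_and(v, p).astype(int)
                labels.append(i)
                break
        else:
            prototypes.append(v.copy())
            labels.append(len(prototypes) - 1)
    return labels, prototypes

data = np.array([[1, 1, 0, 0],
                 [1, 1, 1, 0],
                 [0, 0, 1, 1]])
labels, protos = cluster(data)
```

A fixed radius would treat all three patterns alike; tying the threshold to the pattern itself is the lever the enhanced network uses.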

Feature Extraction Using Convolutional Neural Networks for Random Translation (랜덤 변환에 대한 컨볼루션 뉴럴 네트워크를 이용한 특징 추출)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.23 no.3 / pp.515-521 / 2020
  • Deep learning methods have been used effectively to provide great improvement in various research fields such as machine learning, image processing, and computer vision. One of the most frequently used deep learning methods in image processing is the convolutional neural network. Compared with traditional artificial neural networks, convolutional neural networks do not use predefined kernels; instead, they learn data-specific kernels. This property also allows them to be used as feature extractors. In this study, we compared the quality of CNN features against traditional texture feature extraction methods. Experimental results demonstrate the superiority of the CNN features. Additionally, the recognition process and results of a pioneering CNN on the MNIST database are presented.
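
The data-specific kernels the abstract contrasts with predefined ones are plain 2-D convolutions; a hand-written valid-mode convolution, with a fixed edge-detecting kernel for illustration where a CNN would learn the kernel from data, looks like this:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in most
    deep learning frameworks)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal difference kernel responds at the vertical edge
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1.0, 1.0]])
features = conv2d(image, kernel)
```

The resulting feature map peaks exactly where the edge lies, which is why such responses serve as features for downstream classifiers.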

Adaptive Control of Nonlinear Systems through Improvement of Learning Speed of Neural Networks and Compensation of Control Inputs (신경망의 학습속도 개선 및 제어입력 보상을 통한 비선형 시스템의 적응제어)

  • 배병우;전기준
    • The Transactions of the Korean Institute of Electrical Engineers / v.43 no.6 / pp.991-1000 / 1994
  • To control nonlinear systems adaptively, we improve the learning speed of neural networks and present a novel control algorithm characterized by compensation of control inputs. In the error-backpropagation algorithm for training multilayer neural networks (MLNNs), the effect of the slope of the activation functions on learning performance is investigated, and the learning speed of the neural networks is improved by auto-adjusting the slope of the activation functions. The control system is composed of two MLNNs, one for control and the other for identification, with the weights initialized by off-line training. The control algorithm is modified by a control strategy which compensates for the control error induced by the identification error. Computer simulations show that the proposed control algorithm is efficient in controlling a nonlinear system with abruptly changing parameters.
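
The effect of the activation-function slope on learning speed can be seen directly in the sigmoid's gradient; the paper's auto-adjustment rule itself is not reproduced here:

```python
import numpy as np

def sigmoid(z, slope=1.0):
    """Sigmoid activation with an adjustable slope parameter."""
    return 1.0 / (1.0 + np.exp(-slope * z))

def sigmoid_grad(z, slope=1.0):
    """Derivative of the sloped sigmoid with respect to z."""
    s = sigmoid(z, slope)
    return slope * s * (1.0 - s)

# A steeper slope gives a larger gradient near z = 0, which is the
# lever used to speed up error backpropagation.
g1 = sigmoid_grad(0.0, slope=1.0)
g2 = sigmoid_grad(0.0, slope=2.0)
```

Doubling the slope doubles the gradient at the origin, so weight updates driven by backpropagated errors grow proportionally.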

A Survey on Neural Networks Using Memory Component (메모리 요소를 활용한 신경망 연구 동향)

  • Lee, Jihwan;Park, Jinuk;Kim, Jaehyung;Kim, Jaein;Roh, Hongchan;Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering / v.7 no.8 / pp.307-324 / 2018
  • Recently, recurrent neural networks have been attracting attention for solving prediction problems on sequential data through a structure that considers time dependency. However, as the time steps of the sequential data increase, the vanishing gradient problem occurs. Long short-term memory models have been proposed to solve this problem, but they are limited in how much data they can store and how long they can preserve it. Therefore, research on memory-augmented neural networks (MANN), a learning model combining recurrent neural networks with memory elements, has been actively conducted. In this paper, we describe the structure and characteristics of MANN models, which have emerged as a hot topic in the deep learning field, and present the latest techniques and future research directions that utilize MANN.
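
A minimal sketch of the memory component in MANN-style models: a content-based read that softly attends over the rows of an external memory matrix. This is a generic NTM-flavored read operation, not any specific paper's equations:

```python
import numpy as np

def memory_read(memory, key):
    """Content-based read: cosine similarity between the key and each
    memory row, softmax attention weights, weighted sum of rows."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    w = np.exp(sims) / np.exp(sims).sum()   # softmax attention
    return w @ memory                        # blended read vector

memory = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
key = np.array([1.0, 0.0])
read = memory_read(memory, key)
```

Because the read is a differentiable blend rather than a hard lookup, the whole memory access can be trained end to end by backpropagation.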

A study on fatigue crack growth modelling by back propagation neural networks (역전파 신경회로망을 이용한 피로 균열성장 모델링에 관한 연구)

  • 주원식;조석수
    • Journal of Ocean Engineering and Technology / v.10 no.1 / pp.65-74 / 1996
  • Until now, existing crack growth modelling has used mathematical approximation, but the assumed function has a great influence on this method. In particular, crack growth behavior, which shows very strong nonlinearity, requires a complicated function whose parameters are difficult to set. The main advantages of neural network modelling for engineering applications are simple calculation and the absence of an assumed function. In this paper, after discussing the learning and generalization of neural networks, we performed crack growth modelling on the basis of the above learning algorithms. The J'-da/dt relation predicted by the neural networks shows that test conditions with unlearned data are simulated well, within an estimated mean error of 5%.

  • PDF
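
Backpropagation-based regression of the kind used for such crack growth modelling can be sketched with a tiny one-hidden-layer network; the data, target function, and network sizes below are synthetic stand-ins, not the paper's J'-da/dt measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a crack growth curve (power-law-like shape)
X = np.linspace(0.1, 1.0, 16)[:, None]
Y = X ** 3

W1, b1 = 0.5 * rng.normal(size=(1, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.normal(size=(8, 1)), np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)     # hidden layer
    return H, H @ W2 + b2        # linear output for regression

def mse(X, Y):
    return float(np.mean((forward(X)[1] - Y) ** 2))

loss_before = mse(X, Y)
for _ in range(200):             # plain batch backpropagation
    H, P = forward(X)
    G = 2.0 * (P - Y) / len(X)          # dLoss/dOutput
    GH = (G @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
    W2 -= 0.05 * H.T @ G
    b2 -= 0.05 * G.sum(axis=0)
    W1 -= 0.05 * X.T @ GH
    b1 -= 0.05 * GH.sum(axis=0)
loss_after = mse(X, Y)
```

No functional form is assumed for the curve; the network absorbs the nonlinearity from the data, which is the advantage over the fitted-function approach the abstract criticizes.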