• Title/Abstract/Keywords: Neural networks learning

A Learning Algorithm of Fuzzy Neural Networks with Trapezoidal Fuzzy Weights

  • Lee, Kyu-Hee;Cho, Sung-Bae
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 1998년도 The Third Asian Fuzzy Systems Symposium
    • /
    • pp.404-409
    • /
    • 1998
  • In this paper, we propose a learning algorithm for fuzzy neural networks with trapezoidal fuzzy weights. These fuzzy neural networks can use fuzzy numbers as well as real numbers, and represent linguistic information better than standard neural networks. We construct trapezoidal fuzzy weights by composing two triangles, and devise a learning algorithm that uses the two triangular membership functions. The results of computer simulations on numerical data show that the fuzzy neural networks have a high fitting ability for the target output.
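
The trapezoidal-weight construction above can be pictured with a short sketch (not the authors' code; the parameter values and the particular composition are illustrative assumptions): a trapezoidal membership function (a, b, c, d) takes its rising edge from one triangle and its falling edge from another, so a trapezoidal fuzzy weight can be handled through two triangular membership functions.

```python
import numpy as np

def triangle(x, a, b, c):
    """Membership of a triangular fuzzy number with support [a, c] and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership (a, b, c, d): rises on [a, b], flat on [b, c], falls on [c, d]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

x = np.linspace(-1.0, 2.0, 13)
left  = triangle(x, -0.5, 0.0, 0.5)            # triangle supplying the rising edge
right = triangle(x,  0.5, 1.0, 1.5)            # triangle supplying the falling edge
trap  = trapezoid(x, -0.5, 0.0, 1.0, 1.5)
print(np.allclose(trap[x <= 0.0], left[x <= 0.0]),   # rising edges coincide
      np.allclose(trap[x >= 1.0], right[x >= 1.0]))  # falling edges coincide
```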

딥 러닝 기반의 이미지 압축 알고리즘에 관한 연구 (Study on Image Compression Algorithm with Deep Learning)

  • 이용환
    • 반도체디스플레이기술학회지
    • /
    • Vol. 21 No. 4
    • /
    • pp.156-162
    • /
    • 2022
  • Image compression plays an important role in encoding and improving various forms of images in the digital era. Recent research has focused on deep learning as one of the most exciting machine learning methods, showing that it is a good scheme for analyzing, classifying, and compressing images. Various neural networks can be adapted for image compression, such as deep neural networks, artificial neural networks, recurrent neural networks, and convolutional neural networks. In this review paper, we discuss how to apply the principles of deep learning to obtain better image compression with high accuracy, low loss, and high visual quality. To achieve such performance, deep learning methods need to be applied in a justified manner with distinct analysis.
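
As a concrete, hedged illustration of the kind of network such a review covers, the sketch below is a minimal convolutional autoencoder in PyTorch whose bottleneck plays the role of the compressed code. It is a generic example, not any specific codec from the paper; real learned codecs also add an entropy model and a rate term to the loss.

```python
import torch
from torch import nn

class ConvAutoencoder(nn.Module):
    """Minimal learned image codec sketch: the encoder output is the 'compressed' code."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                      # 1x28x28 -> 8x7x7 code
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                      # 8x7x7 -> 1x28x28 reconstruction
            nn.ConvTranspose2d(8, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(4, 1, 28, 28)                  # toy batch of grayscale images
loss = nn.functional.mse_loss(model(x), x)    # distortion term of a rate-distortion objective
loss.backward()
```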

퍼지 모델을 이용한 신경망의 학습률 조정 (Tuning Learning Rate in Neural Network Using Fuzzy Model)

  • 라혁주;서재용;김성주;전홍태
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅲ
    • /
    • pp.1239-1242
    • /
    • 2003
  • Neural networks are a well-known model for learning nonlinear functions or nonlinear systems. The main idea of a neural network is that the difference between the actual output and the desired output is used to update the weights. Usually, the gradient descent method is used for the learning process. During training, if the learning rate is too large, convergence of the neural network can hardly be guaranteed; on the other hand, if the learning rate is too small, training takes a long time. Therefore, one major problem in using neural networks is to decrease the learning time while still guaranteeing convergence. In this paper, we apply a fuzzy logic model to the neural network to calibrate the learning rate. This method tunes the learning rate dynamically according to the error and demonstrates the optimization of training.
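
The abstract does not list the fuzzy rules themselves, so the sketch below only mimics their effect with a crude crisp rule (the thresholds and scale factors are hypothetical): the learning rate grows while the error keeps falling and shrinks when it rises, demonstrated on a one-weight toy problem.

```python
import numpy as np

def tune_lr(lr, error, prev_error, lo=1e-4, hi=0.5):
    """Crude stand-in for a fuzzy rule base: grow the step while the error is
    still falling, shrink it sharply when the error grows (oscillation)."""
    if error < prev_error:
        lr *= 1.05          # "error decreasing" -> slightly larger step
    else:
        lr *= 0.7           # "error increasing" -> much smaller step
    return float(np.clip(lr, lo, hi))

# One-weight quadratic toy problem: minimise (w - 3)^2 by gradient descent.
w, lr, prev_err = 0.0, 0.1, np.inf
for _ in range(50):
    err = (w - 3.0) ** 2
    grad = 2.0 * (w - 3.0)
    lr = tune_lr(lr, err, prev_err)
    w, prev_err = w - lr * grad, err
print(round(w, 4), round(lr, 4))
```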

Active Random Noise Control using Adaptive Learning Rate Neural Networks

  • Sasaki, Minoru;Kuribayashi, Takumi;Ito, Satoshi
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2005년도 ICCAS
    • /
    • pp.941-946
    • /
    • 2005
  • In this paper, an active random noise control method using adaptive learning rate neural networks is presented. The adaptive learning rate strategy increases the learning rate by a small constant if the current partial derivative of the objective function with respect to the weight and the exponential average of the previous derivatives have the same sign; otherwise, the learning rate is decreased by a proportion of its value. The use of an adaptive learning rate attempts to keep the learning step size as large as possible without leading to oscillation, so that the cost function is minimized rapidly and the training time is decreased. Numerical simulations and experiments on active random noise control with the transfer function of the error path are performed to validate the convergence properties of the adaptive learning rate neural networks. Control results show that the adaptive learning rate neural network control structure can outperform linear controllers and a conventional neural network controller for active random noise control.
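
A minimal sketch of the rule described above (the constants kappa, phi, and beta are assumed values, not taken from the paper): each weight keeps its own learning rate, adds a small constant when the current derivative agrees in sign with the exponential average of past derivatives, and otherwise loses a fixed proportion of its value.

```python
import numpy as np

def adapt_lr(lr, grad, avg_grad, kappa=1e-3, phi=0.1, beta=0.7):
    """Per-weight adaptive rate: increase by a small constant when the current
    derivative and the exponential average of past derivatives share a sign,
    otherwise decrease the rate by a proportion of its value."""
    same_sign = np.sign(grad) == np.sign(avg_grad)
    lr = np.where(same_sign, lr + kappa, lr * (1.0 - phi))
    avg_grad = beta * avg_grad + (1.0 - beta) * grad   # exponential average of derivatives
    return lr, avg_grad

# Toy quadratic objective 0.5 * ||w - t||^2 with per-weight adaptive rates.
t = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
lr = np.full(3, 0.01)
avg = np.zeros(3)
for _ in range(200):
    grad = w - t
    lr, avg = adapt_lr(lr, grad, avg)
    w -= lr * grad
print(np.round(w, 3))    # should approach [1, -2, 0.5]
```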

딥러닝의 모형과 응용사례 (Deep Learning Architectures and Applications)

  • 안성만
    • 지능정보연구
    • /
    • Vol. 22 No. 2
    • /
    • pp.127-142
    • /
    • 2016
  • Deep learning is an advanced form of the artificial neural network model from the field of artificial intelligence, in which the hidden layers of a hierarchically structured network are stacked in multiple stages. The three main deep learning models are the convolutional neural network (CNN), the recurrent neural network (RNN), and the deep belief network (DBN). Among these, the first two, which are supervised learning models, currently attract the most attention, with many interesting studies being published. This paper therefore first reviews the error backpropagation algorithm, the basic method for optimizing the weights of supervised models, and then examines the structures and applications of convolutional and recurrent neural networks. The deep belief network, which is not covered in the main text, has so far received relatively less attention than CNNs or RNNs. However, unlike CNNs and RNNs, the deep belief network is an unsupervised learning model, and since humans and animals learn on their own through observation, unsupervised learning models will ultimately become a subject that deserves more research.
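
Since the survey starts from error backpropagation, a minimal NumPy sketch of the algorithm on a two-layer network may help; the XOR task, layer sizes, and learning rate are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network trained by error backpropagation on XOR (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    d_out = (out - y) * out * (1.0 - out)    # backward pass: output-layer error signal
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)    # error propagated back to the hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))              # should approach [0, 1, 1, 0]
```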

자기조직화 교사 학습에 의한 패턴인식에 관한 연구 (A Study on Pattern Recognition with Self-Organized Supervised Learning)

  • 박찬호
    • 정보학연구
    • /
    • Vol. 5 No. 2
    • /
    • pp.17-26
    • /
    • 2002
  • In this study, we propose SOSL (Self-Organized Supervised Learning), a self-organizing supervised learning neural network, and its architecture. The SOSL network is a hybrid network composed of multiple component error backpropagation (CBP) networks and a modified PCA network. The CBP networks perform supervised learning in parallel on clustered, complex input patterns. The modified PCA network is used to transform the original input patterns into a lower-dimensional representation through clustering and local projection. The proposed SOSL can be applied effectively to networks that would otherwise become very large because of the large number of input patterns.
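
A rough sketch of the "clustering + local projection" idea (a stand-in, not the modified PCA network itself; the cluster count, code dimension, and toy data are assumptions): the inputs are grouped with k-means and each group is projected onto its own principal components before being handed to a smaller supervised network.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                      # toy high-dimensional input patterns

def kmeans(X, k=3, iters=20):
    """Plain k-means, used here as a stand-in for the clustering stage."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

def local_pca(Xc, dim=3):
    """Project one cluster onto its own top principal components (local projection)."""
    Xc = Xc - Xc.mean(0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:dim].T

labels = kmeans(X)
codes = [local_pca(X[labels == j]) for j in range(3)]
print([c.shape for c in codes])                     # each cluster's reduced patterns
```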

퍼지 결합 다항식 뉴럴 네트워크 (Fuzzy Combined Polynomial Neural Networks)

  • 노석범;오성권;안태천
    • 전기학회논문지
    • /
    • Vol. 56 No. 7
    • /
    • pp.1315-1320
    • /
    • 2007
  • In this paper, we introduce a new fuzzy model called fuzzy combined polynomial neural networks, which is based on the representative fuzzy model known as the polynomial fuzzy model. In the design procedure of the proposed fuzzy model, the coefficients of the consequent parts are estimated not by the general least squares estimation algorithm, a kind of global learning algorithm, but by the weighted least squares estimation algorithm, a kind of local learning algorithm. When using a local learning algorithm, we can adopt various types of structures as the consequent part of the fuzzy model. Among these, we select polynomial neural networks, which have nonlinear characteristics and whose final result is a complex mathematical polynomial. The approximation ability of the proposed model can be improved by using polynomial neural networks as the consequent part.
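
The local estimation step can be written down directly; the sketch below implements standard weighted least squares, with a hypothetical Gaussian membership supplying the sample weights of one rule (the rule shape and the toy data are assumptions, not the paper's setup).

```python
import numpy as np

def weighted_lse(X, y, w):
    """Weighted least squares: coefficients a minimising sum_i w_i * (y_i - x_i . a)^2,
    i.e. a = (X^T W X)^{-1} X^T W y with W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Toy example: fit the consequent polynomial of one fuzzy rule, weighting each
# sample by that rule's (hypothetical) membership grade.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 1.0 + 2.0 * x + 0.5 * x ** 2 + rng.normal(0, 0.05, size=50)
X = np.column_stack([np.ones_like(x), x, x ** 2])     # polynomial regressors
w = np.exp(-((x - 0.3) ** 2) / 0.1)                   # membership of a rule centred at 0.3
print(np.round(weighted_lse(X, y, w), 3))             # local estimate near [1, 2, 0.5]
```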

Spiking Neural Networks(SNN) 구조에서 뉴런의 개수와 학습량에 따른 학습 성능 변화 분석 (An analysis of learning performance changes in spiking neural networks(SNN))

  • 김용주;김태호
    • 문화기술의 융합
    • /
    • Vol. 6 No. 3
    • /
    • pp.463-468
    • /
    • 2020
  • Artificial intelligence research is advancing and being applied in a variety of fields. In this paper, we build a neural network using the spiking neural network (SNN) approach, a next-generation area of AI research, and analyze how the number of neurons affects the network's performance. We also analyze how the performance changes as the amount of training increases. These results should make it possible to optimize SNN-based networks used in each field.
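
The paper's network architecture is not given in the abstract, but the basic unit of an SNN can be sketched; below is a plain leaky integrate-and-fire neuron with an assumed time constant and threshold, simulated on random input current.

```python
import numpy as np

def lif_spikes(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, a common SNN building block: the membrane
    potential leaks toward zero, integrates the input, and emits a spike (then
    resets) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt / tau * (-v + i)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 2.0, size=200)     # toy injected current over 200 time steps
print(lif_spikes(current).sum(), "spikes")
```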

신경회로망을 이용한 도립전자의 학습제어 (Learning Control of Inverted Pendulum Using Neural Networks)

  • 이재강;김일환
    • 산업기술연구
    • /
    • Vol. 24, No. A
    • /
    • pp.99-107
    • /
    • 2004
  • This paper considers reinforcement learning control with the self-organizing map. Reinforcement learning uses the observable states of the objective system and signals from the interaction of the system and the environment as input data. For fast learning in neural network training, it is necessary to reduce the learning data. In this paper, we use the self-organizing map to partition the observable states. Partitioning the states reduces the amount of learning data used for training the neural networks, and the neural dynamic programming design method is used for the controller. To evaluate the designed reinforcement learning controller, an inverted pendulum on a cart system is simulated. The designed controller is composed of a serial connection of a self-organizing map and two multi-layer feed-forward neural networks.
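
As a hedged illustration of the state-partitioning stage, the sketch below trains a small one-dimensional self-organizing map on toy two-dimensional observations and reports how the samples distribute over its cells; the map size, neighbourhood schedule, and data are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small self-organizing map quantizes continuous observations into a few cells,
# so the downstream controller sees far fewer distinct training inputs.
states = rng.uniform(-1.0, 1.0, size=(2000, 2))      # hypothetical observations
nodes = rng.uniform(-1.0, 1.0, size=(16, 2))         # 16 SOM prototype vectors

for t, s in enumerate(states):
    lr = 0.5 * (1.0 - t / len(states))               # decaying learning rate
    winner = np.argmin(((nodes - s) ** 2).sum(1))
    # Neighbourhood shrinks over time: update the winner and its index-neighbours.
    radius = max(1, int(4 * (1.0 - t / len(states))))
    for j in range(max(0, winner - radius), min(len(nodes), winner + radius + 1)):
        nodes[j] += lr * np.exp(-abs(j - winner)) * (s - nodes[j])

partition = np.argmin(((states[:, None] - nodes) ** 2).sum(-1), axis=1)
print(np.bincount(partition, minlength=16))          # how many samples fall in each cell
```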

A Learning Algorithm of Fuzzy Neural Networks Using a Shape Preserving Operation

  • Lee, Jun-Jae;Hong, Dug-Hun;Hwang, Seok-Yoon
    • Journal of Electrical Engineering and information Science
    • /
    • Vol. 3 No. 2
    • /
    • pp.131-138
    • /
    • 1998
  • We derive a backpropagation learning algorithm for fuzzy neural networks using fuzzy operations that preserve the shapes of fuzzy numbers, in order to utilize fuzzy if-then rules as well as numerical data when training neural networks for classification problems and fuzzy control problems. By introducing the shape-preserving fuzzy operation into a neural network, the proposed network simplifies the fuzzy arithmetic operations on fuzzy numbers, with exact results, when training the network. We illustrate our approach with computer simulations on numerical examples.
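
The specific shape-preserving operation is not spelled out in the abstract, so the sketch below only illustrates the underlying idea with the simplest case: scaling a triangular fuzzy number by a positive crisp number keeps it triangular, so the network can propagate just the three parameters exactly.

```python
import numpy as np

def scale_triangular(k, tri):
    """Scaling a triangular fuzzy number (a, b, c) by a crisp k > 0 keeps the
    triangular shape exactly: only the three parameters need to be updated."""
    a, b, c = tri
    return (k * a, k * b, k * c)

def membership(x, tri):
    a, b, c = tri
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

tri = (1.0, 2.0, 4.0)
scaled = scale_triangular(1.5, tri)               # (1.5, 3.0, 6.0)
x = np.linspace(0.0, 8.0, 9)
# The membership of the scaled number at k*x equals that of the original at x.
print(np.allclose(membership(1.5 * x, scaled), membership(x, tri)))
```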
