• Title/Summary/Keyword: 퍼셉트론 (perceptron)

Search results: 384

Fuzzy Single Layer Perceptron using Dynamic Adjustment of Threshold (동적 역치 조정을 이용한 퍼지 단층 퍼셉트론)

  • Cho Jae-Hyun;Kim Kwang-Baek
    • Journal of the Korea Society of Computer and Information / v.10 no.5 s.37 / pp.11-16 / 2005
  • Recently, there have been many efforts to apply fuzzy theory to artificial neural networks. Goh proposed a fuzzy single-layer perceptron algorithm and an advanced fuzzy perceptron based on the generalized delta rule to solve the XOR problem and other classical problems. However, it requires a large amount of computation and is difficult to apply to complicated image recognition. In this paper, we propose an enhanced fuzzy single-layer perceptron using dynamic adjustment of the threshold. The method is applied to the XOR problem, which is used as a benchmark in pattern recognition, and to the recognition of digital images as an image application. The experimental results show that, although convergence is not always guaranteed, the proposed network improves the learning time and achieves a high convergence rate.

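The following is a minimal, hypothetical sketch of a single-layer perceptron whose threshold is adjusted dynamically during training, intended only to illustrate the general idea behind the entry above; it uses a crisp update rule and toy OR data, not the fuzzy formulation or the image-recognition experiments of the paper.

```python
import numpy as np

# Hypothetical sketch: a single-layer perceptron whose threshold is updated
# together with the weights ("dynamic threshold adjustment").  This is NOT the
# fuzzy update rule of the paper and, being a crisp single-layer model, it
# cannot solve XOR by itself, so toy OR data are used instead.

def train_perceptron(X, y, lr=0.1, epochs=100):
    w = np.zeros(X.shape[1])
    theta = 0.0                          # threshold, adjusted dynamically
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = 1.0 if xi @ w > theta else 0.0
            err = ti - out
            w += lr * err * xi           # weight update
            theta -= lr * err            # threshold moves with the error
    return w, theta

# Toy, linearly separable data (logical OR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)
w, theta = train_perceptron(X, y)
print(w, theta, [1 if x @ w > theta else 0 for x in X])
```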

A Possibilistic Based Perceptron Algorithm for Finding Linear Decision Boundaries (선형분류 경계면을 찾기 위한 Possibilistic 퍼셉트론 알고리즘)

  • Kim, Mi-Kyung;Rhee, Frank Chung-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.1 / pp.14-18 / 2002
  • The perceptron algorithm, one of a class of gradient descent techniques, has been widely used in pattern recognition to determine linear decision boundaries. However, it may not give desirable results when the pattern sets are nonlinearly separable. A fuzzy version was developed to make up for the weaknesses of the crisp perceptron algorithm by assigning memberships to the pattern sets. However, another drawback remains: the pattern memberships do not consider the class typicality of the patterns. Therefore, we propose a possibilistic approach to the crisp perceptron algorithm. This algorithm combines the linear-separability property of the crisp version with the convergence property of the fuzzy version. Several examples are given to show the validity of the method.
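
As a rough illustration of the entry above, the sketch below weights each perceptron update by a possibilistic (typicality) membership. The membership formula used here, 1 / (1 + (d/eta)^2) based on the distance to the class mean, and all data are assumptions made for illustration; the paper's exact formulation may differ.

```python
import numpy as np

# Rough sketch of a membership-weighted perceptron.  The possibilistic
# membership below (1 / (1 + d^2/eta^2), a typicality measure based on the
# squared distance to the class mean) is an illustrative assumption only.

def possibilistic_memberships(X, labels, eta=1.0):
    u = np.empty(len(X))
    for c in np.unique(labels):
        idx = labels == c
        d2 = np.sum((X[idx] - X[idx].mean(axis=0)) ** 2, axis=1)
        u[idx] = 1.0 / (1.0 + d2 / eta ** 2)       # typical points get weight ~1
    return u

def train(X, y, lr=0.1, epochs=200):
    u = possibilistic_memberships(X, y)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # absorb bias into the weights
    t = np.where(y == 1, 1.0, -1.0)
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, ti, ui in zip(Xb, t, u):
            if ti * (xi @ w) <= 0:                 # misclassified pattern
                w += lr * ui * ti * xi             # update scaled by typicality
    return w

X = np.array([[0.1, 0.2], [0.3, 0.1], [0.9, 0.8], [0.8, 1.0], [5.0, 5.0]])
y = np.array([0, 0, 1, 1, 1])                      # last point is an outlier
print(train(X, y))
```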

Efficient Methods for Combining User and Article Models for Collaborative Recommendation (협력적 추천을 위한 사용자와 항목 모델의 효율적인 통합 방법)

  • 도영아;김종수;류정우;김명원
    • Journal of KIISE: Software and Applications / v.30 no.5_6 / pp.540-549 / 2003
  • In collaborative recommendation, two models are generally used: the user model and the article model. A user model learns correlations between users' preferences and recommends an article based on other users' preferences for that article. Similarly, an article model learns correlations between preferences for articles and recommends an article based on the target user's preferences for other articles. In this paper, we investigate various methods of combining the user model and the article model for better recommendation performance, including simple sequential and parallel methods, the perceptron, the multi-layer perceptron, fuzzy rules, and BKS. We adopt the multi-layer perceptron for training each of the user and article models. The multi-layer perceptron has several advantages over methods such as the nearest-neighbor method and the association-rule method: it can learn weights between correlated items and can easily handle both symbolic and numeric data. The combined models outperform each of the basic models, and our experiments show that the multi-layer perceptron is the most effective combination method among them.
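
A toy sketch of the combination idea in the entry above: the predictions of a user-based model and an article-based model are fed into a small MLP that learns the combined prediction. All data, shapes, and hyperparameters below are made up for illustration and do not reproduce the paper's experimental setup.

```python
import numpy as np

# Illustrative sketch only: combine the ratings predicted by a user-based model
# and an article-based model with a tiny one-hidden-layer MLP.  The base
# predictions are synthetic stand-ins; in the paper both base models are
# themselves multi-layer perceptrons.

rng = np.random.default_rng(0)
user_pred = rng.uniform(1, 5, size=(200, 1))               # stand-in base model 1
item_pred = user_pred + rng.normal(0, 0.5, size=(200, 1))  # stand-in base model 2
X = np.hstack([user_pred, item_pred])
y = 0.6 * user_pred + 0.4 * item_pred + rng.normal(0, 0.1, size=(200, 1))

H = 8                                           # hidden units of the combiner
W1, b1 = rng.normal(0, 0.1, (2, H)), np.zeros(H)
W2, b2 = rng.normal(0, 0.1, (H, 1)), np.zeros(1)
lr = 0.01
for _ in range(2000):                           # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X);  gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("combined RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```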

Bayesian Texture Segmentation Using Multi-layer Perceptron and Markov Random Field Model (다층 퍼셉트론과 마코프 랜덤 필드 모델을 이용한 베이지안 결 분할)

  • Kim, Tae-Hyung;Eom, Il-Kyu;Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.1 / pp.40-48 / 2007
  • This paper presents a novel texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields in a multiscale Bayesian framework. Multiscale wavelet coefficients are used as input to the neural networks, and the network output is modeled as a posterior probability. Texture classification at each scale is performed using the posterior probabilities from the MLP networks and MAP (maximum a posteriori) classification. Then, to obtain an improved segmentation result at the finest scale, the proposed method fuses the multiscale MAP classifications sequentially from coarse to fine scales. This is done by computing the MAP classification given the classification at one scale and a priori contextual information extracted from the adjacent coarser-scale classification. In this fusion process, an MRF (Markov random field) prior distribution and a Gibbs sampler are used, where the MRF model serves as a smoothness constraint and the Gibbs sampler acts as the MAP classifier. The proposed segmentation method shows better performance than texture segmentation using the HMT (hidden Markov tree) model and HMTseg.
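
The sketch below illustrates only the MRF-smoothing part of the entry above: given per-pixel class posteriors (random placeholders standing in for MLP outputs on wavelet features), a Potts-model prior and Gibbs-style sweeps refine the MAP labeling. The multiscale coarse-to-fine fusion described in the paper is omitted.

```python
import numpy as np

# Sketch of MAP labeling with a Potts MRF prior and Gibbs-style sweeps.
# The "posteriors" are random placeholders for MLP outputs; parameters are
# arbitrary illustrative choices.

rng = np.random.default_rng(1)
H, W, K = 32, 32, 3
posterior = rng.dirichlet(np.ones(K), size=(H, W))   # stand-in for MLP posteriors
labels = posterior.argmax(axis=-1)                   # initial MAP classification
beta = 1.0                                           # MRF smoothness strength

for _ in range(5):                                   # Gibbs-style sweeps
    for i in range(H):
        for j in range(W):
            neigh = []                               # 4-neighbourhood labels
            if i > 0: neigh.append(labels[i - 1, j])
            if i < H - 1: neigh.append(labels[i + 1, j])
            if j > 0: neigh.append(labels[i, j - 1])
            if j < W - 1: neigh.append(labels[i, j + 1])
            agree = np.array([np.sum(np.array(neigh) == k) for k in range(K)])
            # posterior energy = data term (MLP posterior) + Potts prior
            energy = -np.log(posterior[i, j] + 1e-12) - beta * agree
            p = np.exp(-(energy - energy.min()))
            labels[i, j] = rng.choice(K, p=p / p.sum())  # sample a new label

print(np.bincount(labels.ravel(), minlength=K))      # label counts after smoothing
```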

Parity Discrimination by Perceptron Neural Network (퍼셉트론형 신경회로망에 의한 패리티판별)

  • Choi, Jae-Seung
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.3 / pp.565-571 / 2010
  • This paper proposes a parity discrimination algorithm that discriminates N-bit parity using a perceptron neural network and the backpropagation algorithm. The algorithm determines the minimum number of hidden units needed to discriminate N-bit parity. Accordingly, this paper carries out N-bit parity discrimination experiments while varying the number of hidden units of the proposed perceptron network. The experiments confirm that the proposed algorithm can discriminate N-bit parity.
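
A minimal sketch of the setup in the entry above: a one-hidden-layer perceptron network trained with plain backpropagation on N-bit parity, with the number of hidden units exposed as a parameter so it can be varied. All hyperparameters are arbitrary choices, not the paper's.

```python
import numpy as np

# One-hidden-layer network trained with backpropagation on N-bit parity.
# Vary n_hidden to look for the smallest network that still learns parity.

def train_parity(n_bits=3, n_hidden=3, lr=0.5, epochs=20000, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[(i >> b) & 1 for b in range(n_bits)]
                  for i in range(2 ** n_bits)], dtype=float)
    y = X.sum(axis=1, keepdims=True) % 2             # parity target
    W1 = rng.normal(0, 1, (n_bits, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1));      b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        d2 = (out - y) * out * (1 - out)              # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)                # hidden-layer delta
        W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
        W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)
    return np.mean((out > 0.5) == (y > 0.5))          # training accuracy

print(train_parity(n_bits=3, n_hidden=3))             # often reaches 1.0
```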

Training of Locally Linear Embedding using Multilayer Perceptrons (LLE(Locally Linear Embedding)의 함수관계에 대한 다층퍼셉트론 학습)

  • Oh, Sang-Hoon
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.217-220 / 2007
  • LLE (Locally Linear Embedding) has been proposed to compute low-dimensional, neighborhood-preserving embeddings of high-dimensional data. However, the whole LLE procedure must be repeated whenever untrained patterns are presented. In this paper, we propose training MLPs (multilayer perceptrons) to perform the LLE mapping from high-dimensional data to their low-dimensional embeddings.

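An illustrative sketch of the idea in the entry above, using scikit-learn (a library choice of this illustration, not the paper's): LLE is run once on training data, and an MLP is then fitted to the high-dimensional-to-low-dimensional mapping so that unseen patterns can be embedded without re-running the whole LLE procedure.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neural_network import MLPRegressor

# Run LLE once on training data, then train an MLP to reproduce the
# high-dim -> low-dim mapping for new, untrained patterns.

X, _ = make_swiss_roll(n_samples=1200, noise=0.05, random_state=0)
X_train, X_new = X[:1000], X[1000:]

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y_train = lle.fit_transform(X_train)            # low-dimensional embedding

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
mlp.fit(X_train, Y_train)                       # learn the LLE mapping

Y_new = mlp.predict(X_new)                      # embed unseen points directly
print(Y_new.shape)                              # (200, 2)
```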

Nonlinear Approximations Using Modified Mixture Density Networks (변형된 혼합 밀도 네트워크를 이용한 비선형 근사)

  • 조원희;박주영
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.10a / pp.543-546 / 2004
  • In the conventional mixture density network introduced by Bishop and Nabney, the parameters of the conditional probability density function are given by the output vector of a single MLP (multi-layer perceptron). More recently, work has been reported under the name "modified mixture density network," in which the priors, conditional means, and covariances of the conditional density are each given by the output vector of an independent MLP. This paper covers the theory, the development of a MATLAB program, and the application of a version in which the conditional mean is linear in the input. We first briefly describe the general mixture density network, then explain the modified mixture density network in which the parameters that would otherwise come from a single multi-layer perceptron are each learned by a separate multi-layer perceptron, and finally introduce a version in which the parameters are likewise obtained from separate multi-layer perceptrons but the mean is obtained through a linear function. Simulations are then used to examine the applicability of this type of mixture density network.

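A structural sketch (forward pass only, no training) of the version described in the entry above: the mixing coefficients and variances of the conditional density come from separate small MLPs, while the conditional means are linear in the input. All shapes and initializations below are arbitrary illustrative assumptions.

```python
import numpy as np

# Structural sketch of a modified mixture density model: separate small MLPs
# for the mixing coefficients and variances, and linear conditional means.
# Forward pass only; parameters are randomly initialised for illustration.

rng = np.random.default_rng(0)
D_in, K, H = 1, 3, 10                            # input dim, components, hidden units

def small_mlp(x, W1, b1, W2, b2):                # one-hidden-layer MLP
    return np.tanh(x @ W1 + b1) @ W2 + b2

# separate parameter sets for the priors and the variances
Wp1, bp1, Wp2, bp2 = rng.normal(size=(D_in, H)), np.zeros(H), rng.normal(size=(H, K)), np.zeros(K)
Wv1, bv1, Wv2, bv2 = rng.normal(size=(D_in, H)), np.zeros(H), rng.normal(size=(H, K)), np.zeros(K)
A, c = rng.normal(size=(D_in, K)), np.zeros(K)   # linear conditional means

def conditional_density(x, t):
    """p(t | x) for a scalar target t under the mixture."""
    logits = small_mlp(x, Wp1, bp1, Wp2, bp2)
    pi = np.exp(logits - logits.max()); pi /= pi.sum()      # mixing coefficients
    var = np.exp(small_mlp(x, Wv1, bv1, Wv2, bv2))          # positive variances
    mu = x @ A + c                                          # linear means
    comp = np.exp(-(t - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return float((pi * comp).sum())

x = np.array([[0.3]])
print(conditional_density(x, t=0.5))
```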

Study on the Alternative of Thiessen Coefficient by One Perceptron Neuron (단층퍼셉트론을 이용한 Thiessen 계수 대안에 관한 연구)

  • Park, Sung-Chun;Kim, Yong-Gu;Jeong, Cheon-lee;Moon, Byoung-seok
    • Proceedings of the Korea Water Resources Association Conference / 2004.05b / pp.859-862 / 2004
  • Basin-average rainfall is used when estimating runoff with a rainfall-runoff model. It can be computed by the arithmetic mean method, the Thiessen weighting method, or the isohyetal method, but the Thiessen weighting method is the one most commonly applied. In the Thiessen method, the area controlled by each gauging station is divided by the total basin area to obtain a weight (the Thiessen coefficient); each station's rainfall is multiplied by its weight, and the results are summed to give the basin-average rainfall. In this study, to find an alternative to the Thiessen coefficient derived from area ratios, the Jiseokcheon basin, a tributary of the Yeongsan River, was selected as the study area. Using a single-layer perceptron, rainfall records from the Dongmyeon, Cheongpung, and Neungju stations were used as inputs and runoff at the Neungju station as output; for each runoff event, the input data were reconstructed by selecting the records with the highest cross-correlation coefficients. The network was trained on the reconstructed data, and the resulting weights are proposed as alternative values for the Thiessen coefficients.

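A hypothetical sketch of the approach in the entry above, using synthetic numbers rather than the Dongmyeon, Cheongpung, and Neungju records: a single linear neuron is trained on rainfall from three gauges against a basin response, and its normalized weights are read off as station weights analogous to Thiessen coefficients.

```python
import numpy as np

# Synthetic illustration only: train a single linear neuron (delta rule) on
# rainfall from three gauges against a basin response and interpret its
# normalised weights as Thiessen-style station weights.

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 10.0, size=(300, 3))          # synthetic rainfall at 3 gauges
true_w = np.array([0.5, 0.3, 0.2])                  # pretend areal weights
response = rain @ true_w + rng.normal(0, 1.0, 300)  # synthetic basin response

w = np.zeros(3)
lr = 1e-5
for _ in range(5000):                               # delta-rule training
    err = rain @ w - response
    w -= lr * rain.T @ err / len(rain)

print("learned station weights:", w / w.sum())      # compare with Thiessen weights
```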

An Information Geometrical Approach on Plateau Problems in Multilayer Perceptron Learning (다층 퍼셉트론 학습의 플라토 문제에 대한 정보기하 이론적 접근)

  • Park, Hye-Yeong;;Lee, Il-Byeong
    • Journal of KIISE: Software and Applications / v.26 no.4 / pp.546-556 / 1999
  • The multilayer perceptron is a representative neural network model that has been applied successfully in a wide range of areas. However, the gradient descent learning method known as the error backpropagation algorithm, which is used to train multilayer perceptrons, converges so slowly that it cannot be applied to problems requiring real-time processing or to environments that change over time. This slow convergence is known to be caused by plateaus, regions of the learning process where the gradient of the error function is so small that the error hardly decreases. In this paper we point out, from the viewpoint of information geometry, a theoretical problem with the gradient used in conventional learning and thereby identify the cause of the plateau problem. Based on this, we present a learning method that uses the natural gradient, newly defined by information geometry, analyze its potential for resolving the plateau problem, and confirm this through experiments.
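
The sketch below contrasts the natural-gradient update w <- w - lr * F^{-1} * grad with ordinary gradient descent for a linear model with squared error, where the Fisher matrix is X^T X / N. It only illustrates why the natural gradient is insensitive to poor scaling of the parameter space; the paper's information-geometric analysis of plateaus in multilayer perceptrons is not reproduced here.

```python
import numpy as np

# Natural-gradient vs. plain gradient descent on a badly scaled linear
# least-squares problem (Fisher matrix = X^T X / N for unit output noise).

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) * np.array([1.0, 20.0])   # badly scaled inputs
y = X @ np.array([1.0, -2.0])                           # targets from true weights
F = X.T @ X / len(X)                                    # Fisher information matrix

w_ng, w_gd = np.zeros(2), np.zeros(2)
for _ in range(20):
    g_ng = X.T @ (X @ w_ng - y) / len(X)                # gradient at w_ng
    w_ng -= 0.9 * np.linalg.solve(F, g_ng)              # natural-gradient step
    g_gd = X.T @ (X @ w_gd - y) / len(X)                # gradient at w_gd
    w_gd -= 0.002 * g_gd                                # plain gradient step

print("natural gradient:", w_ng)                        # close to [1, -2]
print("plain gradient  :", w_gd)                        # still slow in the small-variance direction
```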

Optimal Learning Rates in Gradient Descent Training of Multilayer Perceptrons (다층퍼셉트론의 강하 학습을 위한 최적 학습률)

  • 오상훈
    • The Journal of the Korea Contents Association / v.4 no.3 / pp.99-105 / 2004
  • This paper proposes optimal learning rates for the gradient descent training of multilayer perceptrons: a separate learning rate for the weights associated with each neuron and a separate one for assigning the virtual hidden targets associated with each training pattern. The effectiveness of the proposed method was demonstrated on a handwritten digit recognition task and an isolated-word recognition task, and very fast learning convergence was obtained.

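A loose, heuristic sketch related to the entry above: batch gradient descent on a one-hidden-layer perceptron in which each layer's weights get their own learning rate, here simply scaled by the inverse mean squared norm of that layer's inputs. This is not the optimal-rate derivation of the paper; it only illustrates the idea of separate, data-dependent learning rates.

```python
import numpy as np

# Heuristic per-layer learning rates (inverse mean squared input norm).
# Data, architecture, and rate formula are illustrative assumptions only.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = ((X[:, 0] * X[:, 1]) > 0).astype(float)[:, None]   # XOR-like target
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
base_lr = 1.0

for _ in range(5000):
    h = sig(X @ W1 + b1); out = sig(h @ W2 + b2)
    d2 = (out - y) * out * (1 - out)                   # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)                     # hidden-layer delta
    lr2 = base_lr / np.mean(np.sum(h ** 2, axis=1))    # rate for the output layer
    lr1 = base_lr / np.mean(np.sum(X ** 2, axis=1))    # rate for the hidden layer
    W2 -= lr2 * h.T @ d2 / len(X); b2 -= lr2 * d2.mean(0)
    W1 -= lr1 * X.T @ d1 / len(X); b1 -= lr1 * d1.mean(0)

print("training accuracy:", np.mean((out > 0.5) == (y > 0.5)))
```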