• Title/Abstract/Keyword: perceptron learning

346 results found (processing time: 0.021 s)

Modified Multi-layer Bidirectional Associative Memory with High Performance

  • 정동규;이수영
    • 전자공학회논문지B
    • /
    • Vol. 30B No. 6
    • /
    • pp.93-99
    • /
    • 1993
  • In a previous paper we proposed the multi-layer bidirectional associative memory (MBAM), an extension of the bidirectional associative memory (BAM) to a multilayer architecture, and showed that the MBAM admits binary storage for easy implementation. In this paper we present a Modified MBAM (MOMBAM) with high performance compared to the MBAM and the multi-layer perceptron. The contents include the architecture, the learning method, computer simulation results comparing MOMBAM with the MBAM and the multi-layer perceptron, and the convergence properties shown by computer simulation examples. We also show that the proposed model can be used as a classifier with minor restrictions.

  • PDF
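The recall step of the plain BAM that the MBAM/MOMBAM models above extend can be sketched as follows. This is a minimal generic BAM with Hebbian storage and bipolar patterns, not the paper's modified multilayer architecture; the patterns are illustrative.

```python
import numpy as np

# Minimal BAM sketch: Hebbian storage of bipolar pattern pairs and
# one-step recall with a hard sign threshold.
def bam_train(xs, ys):
    # Correlation matrix: W = sum over pairs of x y^T
    return sum(np.outer(x, y) for x, y in zip(xs, ys))

def bam_recall(W, x):
    # Forward pass X -> Y; a full BAM would iterate X <-> Y to a fixed point
    return np.sign(x @ W)

# Two (orthogonal) illustrative pattern pairs
xs = [np.array([1, -1, 1, -1]), np.array([-1, -1, 1, 1])]
ys = [np.array([1, 1, -1]), np.array([-1, 1, 1])]
W = bam_train(xs, ys)
```

Because the two stored X-patterns are orthogonal, each recall returns its stored partner exactly.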

Evaluation of Predictive Models for Early Identification of Dropout Students

  • Lee, JongHyuk;Kim, Mihye;Kim, Daehak;Gil, Joon-Min
    • Journal of Information Processing Systems
    • /
    • Vol. 17 No. 3
    • /
    • pp.630-644
    • /
    • 2021
  • Educational data analysis is attracting increasing attention with the rise of the big data industry. The amounts and types of learning data available are increasing steadily, and the information technology required to analyze these data continues to develop. The early identification of potential dropout students is very important, because education is key to social mobility and social achievement. Here, we analyze educational data and generate predictive models for student dropout using logistic regression, a decision tree, a naïve Bayes method, and a multilayer perceptron. The multilayer perceptron model using independent variables selected via variance analysis showed better performance than the other models. In addition, we experimentally found that not only grades but also extracurricular activities were important in terms of preventing student dropout.
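The variance-analysis step used above to select independent variables can be sketched as a one-way ANOVA F-score per feature. The data below is synthetic and the class labels are illustrative; this is a generic sketch, not the study's pipeline.

```python
import numpy as np

# One-way ANOVA F-score per feature: ratio of between-class to
# within-class variance. Features with high F discriminate the classes.
def anova_f_scores(X, y):
    classes = np.unique(y)
    grand = X.mean(axis=0)
    # Between-group and within-group sums of squares, per feature
    ssb = sum(len(X[y == c]) * (X[y == c].mean(axis=0) - grand) ** 2 for c in classes)
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0) for c in classes)
    dfb, dfw = len(classes) - 1, len(X) - len(classes)
    return (ssb / dfb) / (ssw / dfw)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)        # 0 = retained, 1 = dropout (synthetic)
X = rng.normal(size=(200, 4))
X[:, 0] += 2.0 * y                 # feature 0 made informative on purpose
scores = anova_f_scores(X, y)
```

Features are then ranked by `scores` and the top-k kept as inputs to the classifier.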

Nonlinear Approximations Using Modified Mixture Density Networks

  • 조원희;박주영
    • 한국지능시스템학회논문지
    • /
    • Vol. 14 No. 7
    • /
    • pp.847-851
    • /
    • 2004
  • In the conventional mixture density network (MDN) introduced by Bishop and Nabney, the parameters of the conditional probability density function are given as the output vector of a single multi-layer perceptron (MLP). More recently, work has been reported, under the name modified mixture density network, on the case where the priors, conditional means, and covariances of the conditional density are each given as the output vector of an independent MLP. This paper covers the theory, and the development of a MATLAB program, for a version in which the conditional means are linear in the input. We first briefly describe the general mixture density network, then the modified mixture density network in which the parameters are each learned by a separate MLP, and finally introduce a version that likewise obtains the parameters through separate MLPs but obtains the mean values through a linear function. Simulations are then used to examine the applicability of these mixture density networks.
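The output structure described above can be sketched as follows: the mixture priors and variances come from separate (here, single-layer) networks, while the conditional means are linear in the input. All weights are untrained illustrative placeholders, not the paper's MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d_in = 3, 2                     # mixture components, input dimension

W_pi = rng.normal(size=(K, d_in))  # "prior" network (softmax output)
W_s  = rng.normal(size=(K, d_in))  # "variance" network (softplus output)
A, b = rng.normal(size=(K, d_in)), rng.normal(size=K)  # linear means

def mdn_density(x, t):
    # Mixing coefficients via softmax, so they are positive and sum to 1
    pi = np.exp(W_pi @ x); pi /= pi.sum()
    sigma = np.log1p(np.exp(W_s @ x)) + 1e-6       # positive std devs
    mu = A @ x + b                                  # means linear in x
    comp = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(pi @ comp)                         # mixture density p(t | x)

x = np.array([0.5, -1.0])
```

Training would fit `W_pi`, `W_s`, `A`, and `b` by maximizing this conditional likelihood over data.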

A New Fuzzy Supervised Learning Algorithm

  • Kim, Kwang-Baek;Yuk, Chang-Keun;Cha, Eui-Young
    • 한국지능시스템학회:학술대회논문집
    • /
    • The Third Asian Fuzzy Systems Symposium, Korea Fuzzy Logic and Intelligent Systems Society, 1998
    • /
    • pp.399-403
    • /
    • 1998
  • In this paper, we propose a new fuzzy supervised learning algorithm. We construct and train a new type of fuzzy neural net to model the linear activation function. Properties of our fuzzy neural net include: (1) a proposed linear activation function; and (2) a modified delta rule for the learning algorithm. We applied the proposed learning algorithm to the exclusive-OR and 3-bit parity problems, which are used as benchmarks in neural networks and pattern recognition, and to a kind of image recognition problem.

  • PDF
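For background, the classical (Widrow-Hoff) delta rule with a linear activation, which the paper modifies, can be sketched as below. The fuzzy modifications themselves are not reproduced; the task and learning rate are illustrative. Note that the plain single-layer delta rule can only learn linearly separable functions such as AND, which is one motivation for modifying it for problems like XOR.

```python
import numpy as np

# Delta rule on the linearly separable AND problem, with a bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
t = np.array([0, 0, 0, 1], float)   # AND targets

w = np.zeros(3)
for _ in range(200):
    for x_i, t_i in zip(X, t):
        y = w @ x_i                   # linear activation
        w += 0.1 * (t_i - y) * x_i    # delta rule: minimize squared error

pred = (X @ w > 0.5).astype(int)      # threshold the linear output
```

The weights converge near the least-squares solution, whose thresholded outputs reproduce AND.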

Learning Model and Application of New Preceding Layer Driven MLP Neural Network

  • 한효진;김동훈;정호선
    • 전자공학회논문지B
    • /
    • Vol. 28B No. 12
    • /
    • pp.27-37
    • /
    • 1991
  • In this paper, the novel PLD (Preceding Layer Driven) MLP (Multi Layer Perceptron) neural network model and its learning algorithm are described. The learning algorithm differs from conventional ones: integer weights and a hard-limit function are used for the synaptic weight values and the activation function, respectively. The entire learning process is performed layer by layer, and the number of layers can be varied with the difficulty of the training data. Since the synaptic weight values are integers, the synapse circuit can be easily implemented in CMOS. The PLD MLP neural network was applied to English characters, arbitrary waveform generation, and the spiral problem.

  • PDF
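The two ingredients above, integer synaptic weights and a hard-limit activation, can be illustrated with a hand-built two-layer net realizing XOR. The weights are chosen by hand for illustration; the paper's layer-by-layer learning algorithm itself is not reproduced.

```python
import numpy as np

def hard_limit(v):
    # Hard-limit activation: output 1 when the weighted sum is >= 0
    return (v >= 0).astype(int)

def pld_forward(x):
    # Hidden layer: an OR unit and an AND unit, all weights/biases integer
    h = hard_limit(np.array([x[0] + x[1] - 1,      # OR
                             x[0] + x[1] - 2]))    # AND
    # Output: OR AND NOT(AND), i.e. XOR, again with integer weights
    return int(hard_limit(np.array([h[0] - 2 * h[1] - 1]))[0])

outputs = [pld_forward((a, b)) for a in (0, 1) for b in (0, 1)]
```

Since every weight and bias is an integer and the activation is a comparator, each unit maps directly onto a simple CMOS synapse circuit, which is the implementation advantage the abstract highlights.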

Enhanced Fuzzy Single Layer Perceptron

  • Chae, Gyoo-Yong;Eom, Sang-Hee;Kim, Kwang-Baek
    • Journal of information and communication convergence engineering
    • /
    • Vol. 2 No. 1
    • /
    • pp.36-39
    • /
    • 2004
  • In this paper, a method of improving the learning speed and convergence rate is proposed to exploit the advantages of artificial neural networks and neuro-fuzzy systems. The method is applied to the XOR and n-bit parity problems, which are used as benchmarks in the field of pattern recognition, and to the recognition of digital images for practical image applications. The experiments show that although convergence is not always guaranteed, the network achieves considerable improvement in learning time and a high convergence rate. The proposed network can be extended to any number of layers; even in the single-layer case, the network learned at high speed and processed huge images rapidly.

Development of a Deep Learning Model for Detecting Fake Reviews Using Author Linguistic Features

  • 신동훈;신우식;김희웅
    • 한국정보시스템학회지:정보시스템연구
    • /
    • Vol. 31 No. 4
    • /
    • pp.01-23
    • /
    • 2022
  • Purpose: This study aims to propose a deep learning-based fake review detection model combining authors' linguistic features with the semantic information of reviews. Design/methodology/approach: This study used 358,071 Yelp reviews to develop the fake review detection model. We employed Linguistic Inquiry and Word Count (LIWC) to extract 24 linguistic features of authors, then used deep learning architectures such as the multilayer perceptron (MLP), long short-term memory (LSTM), and the transformer to learn linguistic and semantic features for fake review detection. Findings: The results show that detection models using both linguistic and semantic features outperformed models using a single type of feature. In addition, the study confirmed that differences in linguistic features between fake and authentic reviewers are significant. That is, linguistic features complement the semantic information of reviews and further enhance the predictive power of the fake review detection model.
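The feature-fusion idea above, concatenating an author's linguistic features with a semantic embedding of the review before classification, can be sketched as a single forward pass. The dimensions and weights are untrained illustrative placeholders, not the study's trained model.

```python
import numpy as np

rng = np.random.default_rng(7)
liwc = rng.normal(size=24)        # 24 author linguistic features (LIWC-style)
semantic = rng.normal(size=64)    # semantic embedding of the review text
x = np.concatenate([liwc, semantic])   # fused 88-dim input

# One hidden layer MLP head with a sigmoid output for P(fake)
W1, b1 = rng.normal(size=(32, 88)) * 0.1, np.zeros(32)
W2 = rng.normal(size=32) * 0.1

h = np.maximum(0.0, W1 @ x + b1)                 # ReLU hidden layer
p_fake = 1.0 / (1.0 + np.exp(-(W2 @ h)))         # sigmoid probability
```

In the study's setting the semantic vector would come from a learned encoder (e.g. an LSTM or transformer) rather than random values.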

A Variance Learning Neural Network for Confidence Estimation

  • 조영빈;권대갑
    • 한국정밀공학회지
    • /
    • Vol. 14 No. 6
    • /
    • pp.121-127
    • /
    • 1997
  • Multilayer feedforward networks may be applied to identify the deterministic relationship between input and output data. When the results from the network require a high level of assurance, consideration of the stochastic relationship between the input and output data may be very important, and variance is one of the effective parameters for dealing with that relationship. This paper presents a new algorithm by which a multilayer feedforward network learns the variance of dispersed data without a preliminary calculation of variance. The network with this learning algorithm is named a variance learning neural network (VALEAN). Computer simulation examples are used for the demonstration and evaluation of VALEAN.

  • PDF
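The core idea, learning the variance of dispersed data directly rather than computing it beforehand, can be sketched by minimizing the Gaussian negative log-likelihood with gradient descent. For clarity this fits only a constant mean and log-variance to scattered samples; it is a generic sketch, not VALEAN's network-based algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=0.5, size=2000)   # dispersed samples

mu, s = 0.0, 0.0                 # s parameterizes log(sigma^2)
for _ in range(500):
    err = data - mu
    # Gradients of the mean Gaussian NLL: 0.5*(s + err^2 * exp(-s))
    g_mu = -np.mean(err) * np.exp(-s)
    g_s = 0.5 * (1.0 - np.mean(err ** 2) * np.exp(-s))
    mu -= 0.1 * g_mu
    s -= 0.1 * g_s

sigma = float(np.exp(0.5 * s))   # learned standard deviation
```

At the minimum, `mu` matches the sample mean and `exp(s)` matches the sample variance, so the variance is learned as a by-product of training rather than precomputed.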

Improvement of multi layer perceptron performance using combination of gradient descent and harmony search for prediction of ground water level

  • 이원진;이의훈
    • 한국수자원학회논문집
    • /
    • Vol. 55 No. 11
    • /
    • pp.903-911
    • /
    • 2022
  • Groundwater, one of the resources for supplying water, experiences water-level fluctuations due to various natural factors. Recently, studies have been conducted that predict groundwater-level fluctuations using artificial neural networks. Conventionally, gradient descent (GD)-based optimizers have been used as the optimizer, the neural-network operator that affects learning. GD-based optimizers have the disadvantages of dependence on initial correlations and the absence of a structure for comparing and storing solutions. To address these shortcomings, this study developed Gradient Descent combined with Harmony Search (GDHS), a new optimizer combining GD with harmony search (HS). To evaluate the performance of GDHS, a multi-layer perceptron (MLP) was used to learn and predict the groundwater level at the Icheon Yulhyeon observation station. Mean squared error (MSE) and mean absolute error (MAE) were used to compare the performance of MLPs using GD and GDHS. In the training results, GDHS had a smaller maximum, minimum, mean, and standard deviation of MSE than GD. In the prediction results, GDHS was evaluated as having smaller errors than GD on all evaluation metrics.
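One generic way to combine GD with harmony search, matching the motivation above, is to let HS maintain a memory of candidate solutions (supplying the compare-and-store structure GD lacks) while a GD step refines each improvised candidate. This is a sketch of that hybrid on a toy quadratic standing in for the MLP loss; the HS parameters are illustrative, not the paper's GDHS specification.

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda w: float(np.sum((w - 3.0) ** 2))   # toy loss, optimum at w = 3
grad = lambda w: 2.0 * (w - 3.0)

hm = [rng.uniform(-10, 10, size=2) for _ in range(5)]   # harmony memory
for _ in range(300):
    if rng.random() < 0.9:                     # memory considering rate
        w = hm[rng.integers(len(hm))].copy()
        if rng.random() < 0.3:                 # pitch adjustment
            w += rng.uniform(-0.5, 0.5, size=2)
    else:
        w = rng.uniform(-10, 10, size=2)       # random improvisation
    w -= 0.1 * grad(w)                         # GD refinement step
    worst = max(range(len(hm)), key=lambda i: f(hm[i]))
    if f(w) < f(hm[worst]):                    # compare and store
        hm[worst] = w

best = min(hm, key=f)
```

The random improvisations reduce sensitivity to the initial weights, while the stored memory lets good solutions survive and be refined further by GD.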

A Study on Evaluation of e-learners' Concentration by using Machine Learning

  • 정영상;주민성;조남욱
    • 디지털산업정보학회논문지
    • /
    • Vol. 18 No. 4
    • /
    • pp.67-75
    • /
    • 2022
  • Recently, e-learning has been attracting significant attention due to COVID-19. However, while e-learning has many advantages, it has disadvantages as well. One of the main disadvantages of e-learning is that it is difficult for teachers to continuously and systematically monitor learners. Although services such as personalized e-learning are provided to compensate for the shortcoming, systematic monitoring of learners' concentration is insufficient. This study suggests a method to evaluate the learner's concentration by applying machine learning techniques. In this study, emotion and gaze data were extracted from 184 videos of 92 participants. First, the learners' concentration was labeled by experts. Then, statistical-based status indicators were preprocessed from the data. Random Forests (RF), Support Vector Machines (SVMs), Multilayer Perceptron (MLP), and an ensemble model have been used in the experiment. Long Short-Term Memory (LSTM) has also been used for comparison. As a result, it was possible to predict e-learners' concentration with an accuracy of 90.54%. This study is expected to improve learners' immersion by providing a customized educational curriculum according to the learner's concentration level.
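The ensemble step above, combining per-sample concentration labels from several base models (e.g. RF, SVM, MLP), can be sketched as a simple majority vote. The prediction arrays are mock values for illustration, not the study's outputs.

```python
import numpy as np

def majority_vote(*model_preds):
    # Stack predictions into shape (models, samples); for binary 0/1
    # labels the vote is 1 when more than half of the models agree.
    votes = np.stack(model_preds)
    return (votes.mean(axis=0) > 0.5).astype(int)

rf  = np.array([1, 0, 1, 1, 0])   # mock per-sample labels from each model
svm = np.array([1, 0, 0, 1, 0])
mlp = np.array([1, 1, 1, 1, 0])
ensemble = majority_vote(rf, svm, mlp)
```

Voting over heterogeneous models tends to cancel the uncorrelated errors of individual classifiers, which is the usual rationale for the ensemble outperforming its members.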