• Title/Abstract/Keywords: Layer-by-layer learning

Search results: 642 (processing time: 0.033 s)

에이전트를 이용한 자동화된 협상에서의 전략수립에 관한 연구 (The Strategy making Process For Automated Negotiation System Using Agents)

  • 전진;박세진;김성식
    • 한국지능정보시스템학회:학술대회논문집 / 한국지능정보시스템학회 2000년도 춘계정기학술대회 e-Business를 위한 지능형 정보기술 / 한국지능정보시스템학회 / pp.207-216 / 2000
  • Due to the recent growing interest in autonomous software agents and their potential application in areas such as electronic commerce, autonomous negotiation has become more important. Evidence from both theoretical analysis and observations of human interactions suggests that if decision makers have prior information on opponents, and furthermore learn the behaviors of other agents from interaction, the overall payoff will increase. We propose a new methodology for a strategy finding process using data mining in an autonomous negotiation system: ANSIA (Autonomous Negotiation System using Intelligent Agent). ANSIA is a strategy-based negotiation system. The framework of ANSIA is composed of the following component layers: 1) a search agent layer, 2) a data mining agent layer, and 3) a negotiation agent layer. The data mining agent layer, which plays a key role as the system engine, extracts strategies from historic negotiations by competitive learning in a neural network. In the negotiation agent layer, we propose an autonomous negotiation process model that makes it possible to estimate the opponent's strategy and to reach an interactive settlement of the negotiation. ANSIA is motivated by providing a computational framework for negotiation and by defining a strategy finding model with an autonomous negotiation process.
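
The competitive-learning step used in the data mining agent layer can be sketched as a winner-take-all update; the toy data, unit count, and learning rate below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def competitive_learning_step(W, x, lr=0.1):
    """Winner-take-all update: the unit whose weight vector is
    closest to the input x moves toward x; all others stay put."""
    dists = np.linalg.norm(W - x, axis=1)   # distance of each unit to x
    winner = int(np.argmin(dists))          # index of the winning unit
    W = W.copy()
    W[winner] += lr * (x - W[winner])       # move the winner toward the input
    return W, winner

# Cluster three toy "negotiation history" vectors with two units.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))                 # 2 competitive units, 3 features
for x in [np.array([1.0, 0.0, 0.0]),
          np.array([0.9, 0.1, 0.0]),
          np.array([0.0, 1.0, 1.0])]:
    W, winner = competitive_learning_step(W, x)
```

Repeating such updates over the negotiation histories makes each unit settle on one cluster of opponent behavior, which is the sense in which strategies are "extracted".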


Efficient weight initialization method in multi-layer perceptrons

  • Han, Jaemin;Sung, Shijoong;Hyun, Changho
    • 한국경영과학회:학술대회논문집 / 한국경영과학회 1995년도 추계학술대회발표논문집; 서울대학교, 서울; 30 Sep. 1995 / pp.325-333 / 1995
  • Back-propagation is the most widely used algorithm for supervised learning in multi-layer feed-forward networks. However, back-propagation is very slow to converge. In this paper, a new weight initialization method for multi-layer perceptrons, called rough map initialization, is proposed. To overcome the long convergence time, possibly due to the random weight initialization of existing multi-layer perceptrons, the rough map initialization method initializes the weights by exploiting the relationship between input and output features through the singular value decomposition technique. The results of this initialization procedure are compared with those of random initialization on encoder and XOR problems.
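
The abstract does not spell out the rough map construction, but an SVD-based initialization in its spirit might look like the sketch below; the function name and the use of the input-output cross-covariance are assumptions for illustration:

```python
import numpy as np

def svd_weight_init(X, Y, n_hidden):
    """Initialize first-layer weights from the input-output relationship:
    use the top singular directions of the cross-covariance X^T Y
    instead of drawing the weights at random (illustrative sketch)."""
    C = X.T @ Y                      # (n_in, n_out) cross-covariance
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    k = min(n_hidden, U.shape[1])
    W = np.zeros((X.shape[1], n_hidden))
    W[:, :k] = U[:, :k]              # leading input directions as initial weights
    return W

# XOR-style toy data: 4 patterns, 2 inputs, 1 output.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])
W0 = svd_weight_init(X, Y, n_hidden=2)
```

The point of such a scheme is that training starts from weight directions already aligned with the input-output structure, rather than from an arbitrary random point.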


성능이 향상된 수정된 다층구조 양방향연상기억메모리 (Modified Multi-layer Bidirectional Associative Memory with High Performance)

  • 정동규;이수영
    • 전자공학회논문지B / Vol. 30B No. 6 / pp.93-99 / 1993
  • In a previous paper we proposed the multi-layer bidirectional associative memory (MBAM), an extension of the bidirectional associative memory (BAM) to a multi-layer architecture, and showed that the MBAM can use binary storage for easy implementation. In this paper we present a MOdified MBAM (MOMBAM) with high performance compared to the MBAM and the multi-layer perceptron. We describe its architecture and learning method, present computer simulation results comparing MOMBAM with the MBAM and the multi-layer perceptron, and illustrate its convergence properties with simulation examples. We also show that the proposed model can be used as a classifier with minor restrictions.
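
The recall dynamics of the underlying BAM (the model the MBAM extends) can be sketched with the standard Hebbian outer-product weights; the toy patterns below are illustrative, not from the paper:

```python
import numpy as np

def bam_train(pairs):
    """Hebbian BAM weights: W = sum of outer products x y^T
    over the bipolar (+1/-1) training pairs."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(W, x, steps=5):
    """Bidirectional recall: bounce between the two layers,
    thresholding with sign, until the pattern settles."""
    for _ in range(steps):
        y = np.sign(x @ W)
        x = np.sign(y @ W.T)
    return x, y

# Two bipolar pattern pairs (toy example).
a1 = np.array([ 1, -1,  1, -1]); b1 = np.array([ 1,  1, -1])
a2 = np.array([-1, -1,  1,  1]); b2 = np.array([-1,  1,  1])
W = bam_train([(a1, b1), (a2, b2)])
x_rec, y_rec = bam_recall(W, a1)
```

A multi-layer variant like the MBAM chains such bidirectional pairs of layers, which is what allows binary storage of the intermediate representations.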


A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons

  • Oh, Sang-Hoon;Lee, Young-Jik
    • ETRI Journal / Vol. 17 No. 1 / pp.11-22 / 1995
  • This paper proposes a modified error function to improve the error back-propagation (EBP) algorithm for multi-layer perceptrons (MLPs), which suffers from slow learning speed. It also suppresses the over-specialization to training patterns that occurs with an algorithm based on the cross-entropy cost function, an approach which markedly reduces learning time. In a similar way to the cross-entropy function, our new function accelerates the learning speed of the EBP algorithm by allowing an output node of the MLP to generate a strong error signal when it is far from the desired value. Moreover, it prevents over-specialization to the training patterns by letting an output node whose value is close to the desired value generate a weak error signal. In a simulation study classifying handwritten digits in the CEDAR [1] database, the proposed method attained 100% correct classification of the training patterns after only 50 sweeps of learning, while the original EBP attained only 98.8% after 500 sweeps. Also, our method shows a mean-squared error of 0.627 on the test patterns, superior to the error of 0.667 with the cross-entropy method. These results demonstrate that our new method excels others in learning speed as well as in generalization.
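
The paper's exact modified error function is not given in the abstract, but the contrast it builds on can be sketched by comparing the output-node error signals of a sigmoid unit under the squared-error and cross-entropy costs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_mse(y, t):
    """Output-node error signal of the squared-error EBP rule:
    it vanishes when the sigmoid saturates, even if y is far from t."""
    return (y - t) * y * (1.0 - y)

def delta_ce(y, t):
    """Error signal under the cross-entropy cost: it stays strong
    when the output is far from the target."""
    return y - t

# A badly wrong, saturated output: target 0, output near 1.
y, t = sigmoid(6.0), 0.0
strong = delta_ce(y, t)     # large error signal, learning proceeds
weak = delta_mse(y, t)      # nearly zero error signal, learning stalls
```

The function proposed in the paper sits between these two regimes: strong like cross-entropy when far from the target, weak like squared error when close, which is what suppresses over-specialization.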


Text-Independent Speaker Identification System Based On Vowel And Incremental Learning Neural Networks

  • Heo, Kwang-Seung;Lee, Dong-Wook;Sim, Kwee-Bo
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2003년도 ICCAS / pp.1042-1045 / 2003
  • In this paper, we propose a speaker identification system that uses vowels carrying the speaker's characteristics. The system is divided into a speech feature extraction part and a speaker identification part. The speech feature extraction part extracts the speaker's features; voiced speech has characteristics that distinguish speakers. For vowel extraction, formants obtained by frequency analysis of the voiced speech are used, and the vowel /a/, whose formants differ between speakers, is extracted from the text. Pitch, formants, intensity, log area ratios, LP coefficients, and cepstral coefficients are candidate features; among these, the cepstral coefficients, which show the best speaker-identification performance, are used. The speaker identification part distinguishes speakers using a neural network, with 12th-order cepstral coefficients as the learning input data. The network is an MLP trained with the BP (back-propagation) algorithm, and hidden and output nodes are added incrementally. The nodes in the incremental learning neural network are interconnected via weighted links; each node in a layer is generally connected to each node in the succeeding layer, with the output nodes providing the network's output. Through vowel extraction and incremental learning, the proposed system needs less training data, reduces learning time, and improves the identification rate.
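
The incremental growth of hidden nodes can be sketched as appending a row and a column to the two weight matrices of an MLP; the sizes, initialization scale, and function name are illustrative assumptions:

```python
import numpy as np

def add_hidden_node(W1, W2, rng):
    """Grow the hidden layer by one node: append a small random row
    to the input->hidden weights and a small random column to the
    hidden->output weights, leaving existing weights untouched."""
    n_in = W1.shape[1]
    n_out = W2.shape[0]
    new_row = 0.01 * rng.normal(size=(1, n_in))
    new_col = 0.01 * rng.normal(size=(n_out, 1))
    W1 = np.vstack([W1, new_row])       # one more hidden unit
    W2 = np.hstack([W2, new_col])       # wire it to every output node
    return W1, W2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 12))           # 3 hidden units, 12 cepstral inputs
W2 = rng.normal(size=(2, 3))            # 2 speakers, 3 hidden units
W1, W2 = add_hidden_node(W1, W2, rng)
```

Because the new weights start small, the grown network initially behaves like the old one and BP training then exploits the extra capacity.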


NETLA를 이용한 이진 신경회로망의 최적 합성방법 (Optimal Synthesis Method for Binary Neural Network using NETLA)

  • 성상규;김태우;박두환;조현우;하홍곤;이준탁
    • 대한전기학회:학술대회논문집 / 대한전기학회 2001년도 하계학술대회 논문집 D / pp.2726-2728 / 2001
  • This paper describes an optimal synthesis method for binary neural networks (BNNs) applied to the approximation of a circular region, using a newly proposed learning algorithm [7]. Our objective is to minimize the number of connections and hidden-layer neurons by using the Newly Expanded and Truncated Learning Algorithm (NETLA) for a multi-layer BNN. The synthesis method in the NETLA is based on the extension principle of Expanded and Truncated Learning (ETL) and on the Expanded Sum of Products (ESP), one of the Boolean expression techniques. It can optimize a given BNN in the binary space without the iterative training required by the conventional error back-propagation (EBP) algorithm [6]. Given only the true and false patterns, the connection weights and threshold values can be determined immediately by the optimal synthesis method of the NETLA without any tedious learning. Furthermore, the number of required hidden-layer neurons can be reduced and fast learning of the BNN can be realized. The superiority of the NETLA over other algorithms is demonstrated on the approximation problem of one circular region.


변형된 잔차블록을 적용한 CNN (CNN Applied Modified Residual Block Structure)

  • 곽내정;신현준;양종섭;송특섭
    • 한국멀티미디어학회논문지 / Vol. 23 No. 7 / pp.803-811 / 2020
  • This paper proposes an image classification algorithm that varies the number of convolution layers in the residual block of ResNet, a representative CNN method. The proposed method modifies the 34- and 50-layer ResNet structures. First, we analyzed the performance with few and with many convolution layers for the structure consisting only of a shortcut and 3 × 3 convolution layers, for the 34- and 50-layer networks. We then analyzed the performance with few and with many convolution layers for the 50-layer bottleneck structure. Applying these results, the best-performing residual block configurations were used to construct a 34-layer simple-structure and a 50-layer bottleneck image classification model. To evaluate the proposed models, they were applied to the CIFAR-10 dataset and the results were analyzed. The proposed 34-layer simple structure and 50-layer bottleneck showed improved performance over the ResNet-110 and DenseNet-40 models.
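
The residual computation the paper varies can be sketched as below; dense layers stand in for the 3 × 3 convolutions, so this is a structural illustration rather than the paper's model:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Simplified residual block: two weight layers on the residual
    path plus an identity shortcut, out = relu(x + W2 relu(W1 x)).
    Dense layers stand in for ResNet's 3x3 convolutions."""
    return relu(x + W2 @ relu(W1 @ x))

rng = np.random.default_rng(0)
d = 8
W1 = 0.1 * rng.normal(size=(d, d))
W2 = 0.1 * rng.normal(size=(d, d))
x = rng.normal(size=d)
y = residual_block(x, W1, W2)
```

The bottleneck variant studied for the 50-layer model replaces the two 3 × 3 layers with a 1×1-3×3-1×1 stack that first reduces and then restores the channel count; varying the layer count inside the block is the knob the paper tunes.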

퍼지논리와 다층 신경망을 이용한 로봇 매니퓰레이터의 위치제어 (Position Control of The Robot Manipulator Using Fuzzy Logic and Multi-layer Neural Network)

  • 김종수;전홍태
    • 한국지능시스템학회논문지 / Vol. 2 No. 1 / pp.17-32 / 1992
  • The multi-layer neural networks widely used to build neural controllers for robot manipulators offer robust learning and adaptation to uncertain changes in the robot's dynamic parameters, as well as real-time control through parallel processing. However, the representative learning method, the error back-propagation algorithm, suffers from slow learning speed. This paper introduces fuzzy logic, which can efficiently handle uncertain and ambiguous information by linguistic means, and proposes a method to improve the learning speed of a neural controller for a robot manipulator. The effectiveness of the proposed controller is demonstrated through simulations of a PUMA 560 robot.
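
A minimal sketch of the fuzzy idea above, scaling the back-propagation learning rate from linguistic measures of the error; the membership functions and rule base here are assumptions for illustration, not the paper's actual fuzzy controller:

```python
def fuzzy_learning_rate(error, d_error, base_lr=0.1):
    """Illustrative fuzzy tuning of the back-propagation step size:
    membership in 'error is large' and 'error is shrinking' scales
    the learning rate up; a small error scales it down."""
    big = min(1.0, abs(error))                 # membership: error is large
    improving = 1.0 if d_error < 0 else 0.0    # membership: error is decreasing
    # Rule: large & improving -> speed up; small error -> slow down.
    gain = 0.5 + big * (0.5 + improving)
    return base_lr * gain

lr_fast = fuzzy_learning_rate(error=0.9, d_error=-0.1)   # far from target, improving
lr_slow = fuzzy_learning_rate(error=0.05, d_error=0.0)   # near target
```

The appeal of the fuzzy formulation is that such rules are stated linguistically ("if the error is large and decreasing, take bigger steps") rather than derived from the plant dynamics.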


공분산과 모듈로그램을 이용한 콘볼루션 신경망 기반 양서류 울음소리 구별 (Convolutional neural network based amphibian sound classification using covariance and modulogram)

  • 고경득;박상욱;고한석
    • 한국음향학회지 / Vol. 37 No. 1 / pp.60-65 / 2018
  • This paper proposes the covariance matrix and the modulogram as input representations for applying a CNN (Convolutional Neural Network) to amphibian call classification. First, a database of the calls of nine amphibian species, including endangered species, was built from recordings in natural environments. Applying the collected data to a CNN requires normalizing acoustic signals of different lengths. To do so, the covariance matrix, which captures the distribution of the features, and the modulogram, which captures their variation over time, were extracted and used as CNN inputs. Experiments were carried out while varying the numbers of convolutional and fully-connected layers, and the CNN was additionally compared with algorithms conventionally used for acoustic signal analysis. The results show that the convolutional layers affect performance more strongly than the fully-connected layers, and that the CNN achieves a recognition rate of 99.07 %, higher than the conventional acoustic-analysis algorithms.
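
The fixed-size normalization via the covariance matrix can be sketched as follows; the frame counts and feature dimension are illustrative, not the paper's settings:

```python
import numpy as np

def covariance_feature(frames):
    """Collapse a variable-length sequence of feature frames
    (n_frames x n_features) into a fixed-size covariance matrix,
    so clips of any duration map to the same CNN input shape."""
    return np.cov(frames, rowvar=False)   # (n_features, n_features)

rng = np.random.default_rng(0)
short_clip = rng.normal(size=(50, 13))    # 50 frames, 13 features per frame
long_clip = rng.normal(size=(400, 13))    # 400 frames, same feature dimension
C1 = covariance_feature(short_clip)
C2 = covariance_feature(long_clip)
```

Both clips yield the same 13 × 13 input, which is what lets a fixed-architecture CNN consume recordings of arbitrary length; the modulogram plays the complementary role of retaining temporal structure.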

비전공자 학부생의 훈련데이터와 기초 인공신경망 개발 결과 분석 및 Orange 활용 (Analysis and Orange Utilization of Training Data and Basic Artificial Neural Network Development Results of Non-majors)

  • 허경
    • 실천공학교육논문지 / Vol. 15 No. 2 / pp.381-388 / 2023
  • Through artificial neural network education using spreadsheets, undergraduate non-majors can understand how an artificial neural network operates and develop their own neural network software. Instruction on the operating principles of a neural network starts with creating training data and assigning answer labels. Students then learn about the firing and activation function of an artificial neuron and the output values computed from the parameters of the input, hidden, and output layers. Finally, they learn how to compute the error between each training example's label and the network's output, and how the input-, hidden-, and output-layer parameters that minimize the sum of squared errors are computed. This spreadsheet-based instruction was conducted with undergraduate non-majors, and their image training data and basic neural network development results were collected. This paper analyzes two training datasets of small 12-pixel images and the corresponding neural network software collected from the students, and presents how the collected training data can be used in the Orange machine learning modeling and analysis tool, together with the execution results.
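
The computation the students lay out cell by cell, a forward pass followed by the sum of squared errors, can be sketched as below; the network size, activation choice, and toy 12-pixel images are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    """One hidden layer, the same computation a spreadsheet version
    lays out in cells: hidden activations, then output activations."""
    h = sigmoid(W1 @ x)
    return sigmoid(W2 @ h)

def sum_squared_error(X, T, W1, W2):
    """Total squared error between the answer labels and the network
    outputs: the quantity the students minimize."""
    return sum(np.sum((forward(x, W1, W2) - t) ** 2) for x, t in zip(X, T))

# Two 12-pixel binary training images with one-hot answer labels.
X = [np.array([1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0], float),
     np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1], float)]
T = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 12))   # 4 hidden units, 12 pixel inputs
W2 = rng.normal(size=(2, 4))    # 2 output labels, 4 hidden units
sse = sum_squared_error(X, T, W1, W2)
```

Adjusting W1 and W2 to drive this sum of squared errors down is exactly the parameter-fitting step the curriculum teaches last, and the same training data can then be loaded into Orange for model comparison.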