• Title/Abstract/Keyword: Competitive learning

Search results: 376 (processing time: 0.022 s)

Supervised Competitive Learning Neural Network with Flexible Output Layer

  • Cho, Seong-won
    • 한국지능시스템학회논문지
    • /
    • Vol. 11, No. 7
    • /
    • pp.675-679
    • /
    • 2001
  • In this paper, we present a new competitive learning algorithm called Dynamic Competitive Learning (DCL). DCL is a supervised learning method that dynamically generates output neurons and automatically initializes their weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether an output neuron is created. If the class of at least one of the LOG nearest output neurons matches the class of the current training pattern, DCL adjusts the weight vector of that output neuron to learn the pattern. If the classes of all the nearest output neurons differ from the class of the training pattern, a new output neuron is created and the training pattern is used to initialize its weight vector. The proposed method differs significantly from previous competitive learning algorithms in that the neuron selected for learning is not limited to the winner and the output neurons are generated dynamically during learning. In addition, the proposed algorithm has a small number of parameters, which are easy to determine and to apply to real-world problems. Experimental results for pattern recognition of remote sensing data and handwritten numeral data indicate the superiority of DCL over conventional competitive learning methods.


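The DCL procedure described in the abstract above can be sketched in a few lines. This is a minimal illustration under that description, not the authors' code: the class name, the `log_k` and `lr` parameters, and breaking ties by picking the closest same-class neuron are all assumptions.

```python
import math

# Hypothetical sketch of Dynamic Competitive Learning (DCL); names and
# tie-breaking are assumptions based on the abstract, not the paper's code.

class DCL:
    def __init__(self, log_k=3, lr=0.1):
        self.log_k = log_k      # LOG: how many nearest output neurons to examine
        self.lr = lr            # learning rate for adjusting a weight vector
        self.weights = []       # one weight vector per output neuron
        self.labels = []        # class label attached to each output neuron

    def train_step(self, x, label):
        if self.weights:
            dists = [math.dist(x, w) for w in self.weights]
            nearest = sorted(range(len(dists)), key=dists.__getitem__)[:self.log_k]
            same = [i for i in nearest if self.labels[i] == label]
            if same:
                # a same-class neuron is among the LOG nearest: adjust it
                i = min(same, key=dists.__getitem__)
                self.weights[i] = [w + self.lr * (xi - w)
                                   for w, xi in zip(self.weights[i], x)]
                return i
        # otherwise create a new output neuron initialized with the pattern
        self.weights.append(list(x))
        self.labels.append(label)
        return len(self.weights) - 1

    def predict(self, x):
        dists = [math.dist(x, w) for w in self.weights]
        return self.labels[dists.index(min(dists))]
```

In this sketch a training pattern either refines the nearest same-class neuron or spawns a new one, so the output layer grows only where the existing neurons misrepresent a class.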
동적 경쟁학습을 수행하는 병렬 신경망 (Parallel neural networks with dynamic competitive learning)

  • 김종완
    • 전자공학회논문지B
    • /
    • Vol. 33B, No. 3
    • /
    • pp.169-175
    • /
    • 1996
  • In this paper, a new parallel neural network system that performs dynamic competitive learning is proposed. Conventional learning methods utilize the full dimension of the original input patterns. However, a particular attribute or dimension of the input patterns does not necessarily contribute to classification. The proposed system consists of parallel neural networks with reduced input dimensions in order to take advantage of the information in each dimension of the input patterns. Consensus schemes were developed to decide the final output of the parallel networks. Each network performs a competitive learning that dynamically generates output neurons as learning proceeds, and each output neuron has its own class threshold. Because the class thresholds are adjusted dynamically during the learning phase, the proposed neural network adapts properly to the input pattern distribution. Experimental results with remote sensing and speech data indicate the improved performance of the proposed method compared to conventional learning methods.


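The parallel-network idea above (member networks trained on reduced input dimensions, combined by a consensus scheme) can be illustrated roughly as follows. A nearest-prototype learner stands in for each member network, and all names here (`ProtoNet`, `consensus_predict`) are assumptions for illustration; the paper's member networks are dynamic competitive learners.

```python
import math
from collections import Counter

# Hedged sketch: several simple classifiers, each seeing only a subset of
# the input dimensions, combined by a majority-vote consensus.

class ProtoNet:
    """Nearest-prototype classifier over a chosen subset of dimensions."""
    def __init__(self, dims):
        self.dims = dims        # indices of the input dimensions this net uses
        self.protos = {}        # class label -> mean prototype vector

    def fit(self, X, y):
        for c in set(y):
            rows = [[x[d] for d in self.dims] for x, t in zip(X, y) if t == c]
            n = len(rows)
            self.protos[c] = [sum(col) / n for col in zip(*rows)]

    def predict_one(self, x):
        xr = [x[d] for d in self.dims]
        return min(self.protos, key=lambda c: math.dist(xr, self.protos[c]))

def consensus_predict(nets, x):
    """Majority vote across the parallel networks."""
    votes = Counter(net.predict_one(x) for net in nets)
    return votes.most_common(1)[0][0]
```

The design point the abstract makes survives even in this caricature: a dimension that carries no class information (the third column in a test like the one below) cannot mislead the ensemble, because only some members see it.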
개방형 혁신과 조직학습 특성이 벤처기업의 기술경쟁우위에 미치는 영향 (The Effect of Open Innovation and Organizational Learning on Technological Competitive Advantage in Venture Business)

  • 서리빈;윤현덕
    • 지식경영연구
    • /
    • Vol. 13, No. 2
    • /
    • pp.73-93
    • /
    • 2012
  • Although a wide range of theoretical studies have emphasized the importance of knowledge management in cooperative R&D networks, empirical studies that synthetically examine the roles of organizational learning and open innovation in the performance of technological innovation are insufficient to meet academic and practical demands. This study investigates the effect of open innovation and organizational learning in venture businesses on technological competitive advantage and establishes the mediating role of organizational learning. For this purpose, questionnaires developed from a review of previous research were collected from 274 Korean venture businesses whose managerial focus is on developing technological innovation. The analysis shows that the relational dimensions of open innovation (the network, intensity, and trust shared by a firm with external R&D partners) as well as the internal organizational learning system and competence have a positive influence on building technological competitive advantage, whose sub-variables are technological excellence, market growth potential, and business feasibility. In addition, organizational learning is identified as having mediating and moderating effects in the relationship between open innovation and technological competitive advantage. These results imply that open innovation complements and expands the range of limited resources and the scope of innovation in technology-intensive small and medium-sized enterprises. Besides, organizational learning activity reinforces the use of knowledge and resources obtained from external R&D partners. On the basis of these results, detailed issues are discussed in the conclusion.


가변 출력층 구조의 경쟁학습 신경회로망을 이용한 패턴인식 (Pattern recognition using competitive learning neural network with changeable output layer)

  • 정성엽;조성원
    • 전자공학회논문지B
    • /
    • Vol. 33B, No. 2
    • /
    • pp.159-167
    • /
    • 1996
  • In this paper, a new competitive learning algorithm called dynamic competitive learning (DCL) is presented. DCL is a supervised learning method that dynamically generates output neurons and initializes their weight vectors from training patterns. It introduces a new parameter called LOG (limit of grade) to decide whether or not an output neuron is created. In other words, if some neurons among the LOG nearest ones classify the input vector correctly, then DCL adjusts the weight vector of the neuron with the minimum grade. Otherwise, it produces a new output neuron from the given input vector. It differs largely from previous methods in that the neuron selected for learning is not limited to the winner and the output neurons are generated dynamically during the training process. In addition, the proposed algorithm has a small number of parameters, which are easy to determine and to apply to real problems. Experimental results for pattern recognition of remote sensing data and handwritten numeral data indicate the superiority of DCL over conventional competitive learning methods.


A Study on the Effect of Pair Check Cooperative Learning in Operating System Class

  • Shin, Woochang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 12, No. 1
    • /
    • pp.104-110
    • /
    • 2020
  • In the era of the 4th Industrial Revolution, the competitiveness of the software industry is important, and a fundamental way to secure it is for educational institutions to provide classes that train high-quality software personnel. Despite this social situation, software-related classes in universities are largely built on competitive or individual learning structures. Cooperative learning is a learning model that can complement the problems of competitive and individual learning. It is more effective in improving academic achievement than individual or competitive learning, and most learners gain the advantage of a more desirable self-image through the experience of success. In this paper, we apply the pair check model, a type of cooperative learning, in operating system classes. In addition, the class procedure and instruction plan are designed to apply the pair check model. We analyze the test results to evaluate the performance of the cooperative learning model.

Fokker-plank 방정식의 해석을 통한 Langevine 경쟁학습의 동역학 분석 (Analysis of the fokker-plank equation for the dynamics of langevine competitive learning neural network)

  • 석진욱;조성원
    • 전자공학회논문지C
    • /
    • Vol. 34C, No. 7
    • /
    • pp.82-91
    • /
    • 1997
  • In this paper, we analyze the dynamics of the Langevin competitive learning neural network based on its Fokker-Planck equation. From the viewpoint of stochastic differential equations (SDEs), the Langevin competitive learning equation is a Langevin stochastic differential equation and has a diffusion equation on the probability space (Ω, F, P). We derive the Fokker-Planck equation for the proposed algorithm and prove, by introducing an infinitesimal operator for Markov semigroups, that the weight vector in a particular simplex can converge to the globally optimal point under the condition of a convex or pseudo-convex performance measure function. Experimental results for pattern recognition of remote sensing data indicate the superiority of the Langevin competitive learning neural network over the conventional competitive learning neural network.


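The Langevin-type learning rule discussed in this and the following entries can be caricatured as ordinary competitive learning plus annealed Gaussian noise on the winner update. The sketch below is a loose illustration under that reading; the schedule `noise0 / (1 + t)` and all constants are assumptions, not the papers' exact formulation.

```python
import math
import random

# Illustrative sketch: the winner update carries additive Gaussian noise
# whose level is annealed toward zero, so early updates explore while late
# updates settle (the Langevin / simulated-annealing flavor of the papers).

def langevin_step(weights, x, t, lr=0.1, noise0=0.5, rng=random):
    """One update: deterministic pull of the winner toward x, plus noise."""
    dists = [math.dist(x, w) for w in weights]
    winner = dists.index(min(dists))
    sigma = noise0 / (1.0 + t)          # annealed noise level (assumed form)
    weights[winner] = [w + lr * (xi - w) + rng.gauss(0.0, sigma)
                       for w, xi in zip(weights[winner], x)]
    return winner
```

As the noise is annealed, the update reduces to plain winner-take-all competitive learning, which matches the papers' claim that the stochastic term is what allows escaping poor local configurations.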
축합조건의 분석을 통한 Langevine 경쟁 학습 신경회로망의 대역 최소화 근사 해석과 필기체 숫자 인식에 관한 연구 (A study of global minimization analysis of Langevine competitive learning neural network based on contraction condition and its application to recognition for the handwritten numeral)

  • 석진욱;조성원;최경삼
    • 제어로봇시스템학회:학술대회논문집
    • /
    • Institute of Control, Robotics and Systems: 1996 Korean Automatic Control Conference Proceedings (Domestic); POSTECH, Pohang; 24-26 Oct. 1996
    • /
    • pp.466-469
    • /
    • 1996
  • In this paper, we present a global minimization condition through an informal analysis of the Langevin competitive learning neural network. From the viewpoint of stochastic processes, it is important that competitive learning guarantees an optimal solution for pattern recognition. By analyzing the Fokker-Planck equation for the proposed neural network, we show that if the energy function has a special pseudo-convexity, Langevin competitive learning can find the global minima. Experimental results for pattern recognition of handwritten numeral data indicate the superiority of the proposed algorithm.


Langevine 경쟁학습 신경회로망의 확산성과 대역 최적화 성질의 근사 해석 (An Informal Analysis of Diffusion, Global Optimization Properties in Langevine Competitive Learning Neural Network)

  • 석진욱;조성원;최경삼
    • 대한전기학회:학술대회논문집
    • /
    • Korean Institute of Electrical Engineers: Proceedings of the 1996 Summer Conference B
    • /
    • pp.1344-1346
    • /
    • 1996
  • In this paper, we give an informal analysis of the diffusion and global optimization properties of the Langevin competitive learning neural network. From the viewpoint of stochastic processes, it is important that competitive learning guarantees an optimal solution for pattern recognition. We show that the binary reinforcement function in Langevin competitive learning is a Brownian motion (a Gaussian process), and construct the Fokker-Planck equation for the proposed neural network. Finally, our informal analysis shows that the proposed algorithm can reach a globally optimal solution given a proper initial condition.


동적으로 출력 뉴런을 생성하는 경쟁 학습 신경회로망 (Competitive Learning Neural Network with Dynamic Output Neuron Generation)

  • 김종완;안제성;김종상;이흥호;조성원
    • 전자공학회논문지B
    • /
    • Vol. 31B, No. 9
    • /
    • pp.133-141
    • /
    • 1994
  • Conventional competitive learning algorithms compute the Euclidean distance to determine the winner among all predetermined output neurons. In such cases, there is a drawback that the performance of the learning algorithm depends on the initial reference (weight) vectors. In this paper, we propose a new competitive learning algorithm that dynamically generates output neurons. The proposed method generates output neurons by dynamically changing the class thresholds for all output neurons. We compute the similarity between the input vector and the reference vector of each generated output neuron. If the two are similar, the reference vector is adjusted to make it still more similar to the input vector. Otherwise, the input vector is designated as the reference vector of a new output neuron. Since the reference vectors of output neurons are assigned dynamically according to the input pattern distribution, the proposed method avoids the premature termination of learning caused by redundant output neurons. Experiments using speech data have shown the proposed method to be superior to existing methods.


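The generate-or-adjust rule in the abstract above can be sketched as follows. The fixed `threshold` and `lr` values are simplifying assumptions for illustration, since the paper adjusts the class thresholds dynamically during learning.

```python
import math

# Rough sketch of dynamic output neuron generation: an input close enough to
# an existing reference vector refines it; otherwise the input becomes the
# reference vector of a new output neuron.

def train_dynamic(patterns, threshold=0.5, lr=0.2):
    refs = []                            # reference (weight) vectors
    for x in patterns:
        if refs:
            dists = [math.dist(x, r) for r in refs]
            i = dists.index(min(dists))
            if dists[i] < threshold:     # similar enough: move the reference
                refs[i] = [r + lr * (xi - r) for r, xi in zip(refs[i], x)]
                continue
        refs.append(list(x))             # otherwise generate a new neuron
    return refs
```

Because references are created only where no existing one is similar, the number of output neurons tracks the input distribution instead of being fixed in advance, which is the point the abstract makes against predetermined output layers.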
분류된 학습률을 가진 고속 경쟁 학습 (Fast Competitive Learning with Classified Learning Rates)

  • 김창욱;조성원;이충웅
    • 전자공학회논문지B
    • /
    • Vol. 31B, No. 11
    • /
    • pp.142-150
    • /
    • 1994
  • This paper studies fast competitive learning with classified learning rates. The basic idea is to assign a classified learning rate to the weight vector of each output node. Each weight vector of an output node is updated with its own learning rate. Each learning rate changes only when its associated output node wins the competition; the learning rates of the losing nodes remain unchanged. Experimental results on image vector quantization show that the proposed method learns faster and produces better image quality than conventional competitive learning methods.

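The classified-learning-rate update described above can be sketched as follows. The multiplicative decay of the winner's rate is an assumed concrete rule for illustration; the abstract only states that a rate changes when its node wins the competition.

```python
import math

# Hedged sketch of "classified learning rates": each output node carries its
# own learning rate; the winner's weight vector is updated with that rate,
# and only the winner's rate is then changed (decayed here) while the losing
# nodes' rates stay fixed.

def classified_lr_step(weights, rates, x, decay=0.95):
    dists = [math.dist(x, w) for w in weights]
    winner = dists.index(min(dists))
    weights[winner] = [w + rates[winner] * (xi - w)
                       for w, xi in zip(weights[winner], x)]
    rates[winner] *= decay               # only the winning node's rate changes
    return winner
```

Nodes that win often cool down quickly and stabilize, while rarely winning nodes keep a large rate and can still move fast when they finally win, which is one plausible reading of why the scheme speeds up convergence.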