• Title/Summary/Keyword: supervised competitive learning

Search Result 15

Supervised Competitive Learning Neural Network with Flexible Output Layer

  • Cho, Seong-won
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.7
    • /
    • pp.675-679
    • /
    • 2001
  • In this paper, we present a new competitive learning algorithm called Dynamic Competitive Learning (DCL). DCL is a supervised learning method that dynamically generates output neurons and automatically initializes their weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether an output neuron is created. If the class of at least one of the LOG nearest output neurons matches the class of the current training pattern, DCL adjusts the weight vector of that neuron to learn the pattern. If the classes of all the nearest output neurons differ from the class of the training pattern, a new output neuron is created and the training pattern is used to initialize its weight vector. The proposed method differs significantly from previous competitive learning algorithms in that the neuron selected for learning is not limited to the winner and the output neurons are generated dynamically during learning. In addition, the algorithm has a small number of parameters, which are easy to determine and apply to real-world problems. Experimental results for pattern recognition of remote sensing data and handwritten numeral data indicate the superiority of DCL over conventional competitive learning methods.

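The neuron-creation rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the defaults for `log_k` (the LOG parameter), the learning rate, and the epoch count are assumptions, and the "nearest same-class" neuron stands in for the minimum-grade neuron.

```python
import numpy as np

def dcl_fit(patterns, labels, log_k=3, lr=0.1, epochs=5):
    """Sketch of Dynamic Competitive Learning (DCL).

    Output neurons (weight vectors plus class labels) are created
    dynamically: if none of the log_k nearest neurons shares the class
    of the current pattern, a new neuron is initialized from that
    pattern; otherwise the nearest same-class neuron moves toward it.
    """
    weights, classes = [], []
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            if not weights:  # first pattern bootstraps the output layer
                weights.append(np.array(x, float)); classes.append(y)
                continue
            d = [np.linalg.norm(x - w) for w in weights]
            order = np.argsort(d)[:log_k]          # LOG nearest neurons
            same = [i for i in order if classes[i] == y]
            if same:                               # learn with nearest same-class neuron
                i = same[0]
                weights[i] += lr * (x - weights[i])
            else:                                  # otherwise create a new output neuron
                weights.append(np.array(x, float)); classes.append(y)
    return weights, classes

def dcl_predict(x, weights, classes):
    """Classify x by the class of the nearest output neuron."""
    d = [np.linalg.norm(x - w) for w in weights]
    return classes[int(np.argmin(d))]
```

Because creation depends only on the LOG nearest neighbors, the number of output neurons grows with the complexity of the class boundaries rather than being fixed in advance.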

Pattern recognition using competitive learning neural network with changeable output layer (가변 출력층 구조의 경쟁학습 신경회로망을 이용한 패턴인식)

  • 정성엽;조성원
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.2
    • /
    • pp.159-167
    • /
    • 1996
  • In this paper, a new competitive learning algorithm called Dynamic Competitive Learning (DCL) is presented. DCL is a supervised learning method that dynamically generates output neurons and initializes their weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether or not an output neuron is created. In other words, if some neurons among the LOG nearest ones classify the input vector correctly, DCL adjusts the weight vector of the neuron that has the minimum grade. Otherwise, it produces a new output neuron using the given input vector. The method differs largely from previous algorithms in that learning is not limited to the winner and the output neurons are generated dynamically during the training process. In addition, the proposed algorithm has a small number of parameters, which are easy to determine and apply to real problems. Experimental results for pattern recognition of remote sensing data and handwritten numeral data indicate the superiority of DCL over conventional competitive learning methods.


Object Classification Based on LVQ with Flexible Output Layer (가변적 output layer를 이용한 LVQ 기반 물체 분류)

  • Kim, Hun-Ki;Cho, Seong-Won;Kim, Jae-Min;Lee, Jin-Hyung;Kim, Seok-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2007.10a
    • /
    • pp.407-408
    • /
    • 2007
  • In this paper, we present a new method for classifying objects using LVQ (Learning Vector Quantization) with a flexible output layer. The proposed LVQ is a supervised learning method that dynamically generates output neurons and automatically initializes their weight vectors from training patterns. If the class of the nearest output neuron differs from the class of the training pattern, a new output neuron is created and the training pattern is used to initialize its weight vector. The proposed method differs significantly from previous competitive learning algorithms in that the output neurons are generated dynamically during learning.

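The growth rule of this flexible-output-layer LVQ can be sketched in a few lines. A minimal sketch, assuming the standard LVQ1 pull rule for a correctly classified winner and an assumed learning rate, since the abstract specifies neither:

```python
import numpy as np

def flexible_lvq_fit(patterns, labels, lr=0.05, epochs=10):
    """LVQ with a flexible output layer (sketch).

    If the nearest output neuron has a different class than the
    training pattern, a new neuron is created from the pattern;
    otherwise the winner is pulled toward the pattern (LVQ1 rule).
    """
    protos = [np.array(patterns[0], float)]  # first pattern seeds the layer
    proto_cls = [labels[0]]
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            i = int(np.argmin([np.linalg.norm(x - p) for p in protos]))
            if proto_cls[i] == y:
                protos[i] += lr * (x - protos[i])   # pull winner closer
            else:
                protos.append(np.array(x, float))    # grow the output layer
                proto_cls.append(y)
    return protos, proto_cls
```

Classification then assigns a query the class of its nearest prototype, exactly as in standard LVQ.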

Self-Supervised Rigid Registration for Small Images

  • Ma, Ruoxin;Zhao, Shengjie;Cheng, Samuel
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.1
    • /
    • pp.180-194
    • /
    • 2021
  • For small image registration, feature-based approaches are likely to fail because feature detectors cannot detect enough feature points in low-resolution images. The classic FFT approach achieves high prediction accuracy, but registration is relatively slow, taking several seconds per image pair. To achieve real-time, high-precision rigid registration for small images, we apply deep neural networks for supervised rigid transformation prediction, which directly predict the transformation parameters. We train deep registration models with rigidly transformed CIFAR-10 and STL-10 images, and evaluate their generalization ability on transformed CIFAR-10 images, STL-10 images, and randomly generated images. Experimental results show that the proposed deep registration models achieve accuracy comparable to the classic FFT approach for small CIFAR-10 images (32×32), and our LSTM registration model takes less than 1 ms to register one pair of images. For moderate-size STL-10 images (96×96), FFT significantly outperforms the deep registration models in accuracy but is also considerably slower. Our results suggest that deep registration models have competitive advantages over conventional approaches, at least for small images.
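The "classic FFT approach" compared against here is typically phase correlation, which recovers the translation between two images from the peak of the normalized cross-power spectrum. A minimal numpy sketch (integer translation only; the rotational part of a rigid transform is not handled):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation (dy, dx) such that
    b ≈ roll(a, (dy, dx)), via FFT phase correlation."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # normalized cross-power
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # map peaks past the midpoint to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

The whole estimate costs a few FFTs, which explains why it is accurate yet slower than a single forward pass of a small trained network.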

DR-LSTM: Dimension reduction based deep learning approach to predict stock price

  • Ah-ram Lee;Jae Youn Ahn;Ji Eun Choi;Kyongwon Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.2
    • /
    • pp.213-234
    • /
    • 2024
  • In recent decades, increasing research attention has been directed toward predicting stock prices in financial markets using deep learning methods. For instance, the recurrent neural network (RNN) is known to be competitive for time-series data. Long short-term memory (LSTM) further improves the RNN by providing an alternative approach to the vanishing gradient problem, and gains predictive accuracy by retaining memory for longer. In this paper, we combine both supervised and unsupervised dimension reduction methods with LSTM to enhance forecasting performance, and refer to this as the dimension reduction based LSTM (DR-LSTM) approach. For supervised dimension reduction, we use methods such as sliced inverse regression (SIR), sparse SIR, and kernel SIR. Furthermore, principal component analysis (PCA), sparse PCA, and kernel PCA are used as unsupervised dimension reduction methods. Using datasets of real stock market indices (S&P 500, STOXX Europe 600, and KOSPI), we present a comparative study of predictive accuracy between six DR-LSTM methods and time series modeling.
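The unsupervised reduction step of DR-LSTM can be illustrated with plain PCA computed via SVD; the LSTM stage and the supervised SIR variants are omitted, so this is only a sketch of the preprocessing, with `k` an assumed target dimension:

```python
import numpy as np

def pca_reduce(X, k):
    """Unsupervised reduction step (sketch): project the feature
    matrix X (n_samples x n_features) onto its top-k principal
    components, yielding the low-dimensional series that would be
    windowed and fed to the LSTM."""
    Xc = X - X.mean(axis=0)                         # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                            # n_samples x k scores
```

Swapping this step for sparse or kernel PCA, or a supervised method such as SIR, changes only the projection, not the downstream sequence model.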

Learning Free Energy Kernel for Image Retrieval

  • Wang, Cungang;Wang, Bin;Zheng, Liping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.8
    • /
    • pp.2895-2912
    • /
    • 2014
  • Content-based image retrieval has been the most important technique for managing huge amounts of images. The fundamental yet highly challenging problem in this field is how to measure content-level similarity based on low-level image features. The primary difficulties lie in the great variance within images, e.g., background, illumination, viewpoint, and pose. Intuitively, an ideal similarity measure should adapt to the data distribution, discover and highlight content-level information, and be robust to those variances. Motivated by these observations, we propose a probabilistic similarity learning approach. We first model the distribution of low-level image features and derive the free energy kernel (FEK), i.e., the similarity measure, from that distribution. Then, we propose a learning approach for the derived kernel under the criterion that the kernel outputs high similarity for images sharing the same class label and low similarity for those that do not. The advantages of the proposed approach over previous approaches are threefold. (1) With the ability inherited from probabilistic models, the similarity measure adapts well to the data distribution. (2) Benefiting from the content-level hidden variables within the probabilistic models, the similarity measure captures content-level cues. (3) It fully exploits class labels in the supervised learning procedure. The proposed approach is extensively evaluated on two well-known databases and achieves highly competitive performance on most experiments, which validates its advantages.

Adversarial learning for underground structure concrete crack detection based on semi­supervised semantic segmentation (지하구조물 콘크리트 균열 탐지를 위한 semi-supervised 의미론적 분할 기반의 적대적 학습 기법 연구)

  • Shim, Seungbo;Choi, Sang-Il;Kong, Suk-Min;Lee, Seong-Won
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.22 no.5
    • /
    • pp.515-528
    • /
    • 2020
  • Underground concrete structures are usually designed to be used for decades, but in recent years many of them have been nearing their original life expectancy. It is therefore necessary to promptly inspect and repair such structures, since deterioration can cause loss of fundamental functions and bring unexpected problems. Personnel-based inspections and repairs have been underway for the maintenance of underground structures, but objective inspection technologies are now being actively developed through the fusion of deep learning and image processing. In particular, various studies have been conducted on concrete crack detection algorithms based on supervised learning. Most of these studies require a large amount of image data, especially label images, and securing such images takes considerable time and labor in practice. To resolve this problem, we introduce a method that improves the accuracy of crack area detection by 0.25% on average by applying adversarial learning. The adversarial learning scheme consists of a segmentation neural network and a discriminator neural network; it improves recognition performance by generating virtual label images in a competitive structure. In this study, an efficient deep neural network learning method was proposed using this scheme, and it is expected to be used for accurate crack detection in the future.

Unsupervised Learning with Natural Low-light Image Enhancement (자연스러운 저조도 영상 개선을 위한 비지도 학습)

  • Lee, Hunsang;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.2
    • /
    • pp.135-145
    • /
    • 2020
  • Recently, deep-learning based methods for low-light image enhancement have achieved great success through supervised learning. However, they still suffer from a lack of sufficient training data due to the difficulty of obtaining a large number of low-/normal-light image pairs in real environments. In this paper, we propose an unsupervised learning approach for single low-light image enhancement using the bright channel prior (BCP), which imposes the constraint that the brightest pixel in a small patch is likely to be close to 1. With this prior, a pseudo ground-truth is first generated to establish an unsupervised loss function, and the proposed enhancement network is then trained using this loss. To the best of our knowledge, this is the first attempt to perform low-light image enhancement through unsupervised learning. In addition, we introduce a self-attention map for preserving image details and naturalness in the enhanced result. We validate the proposed method on various public datasets, demonstrating that it achieves competitive performance against state-of-the-art methods.
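The bright channel prior can be computed directly: for each pixel, take the maximum intensity over all channels within a local patch. A minimal sketch with an assumed patch size (the paper's window size is not given in the abstract):

```python
import numpy as np

def bright_channel(img, patch=15):
    """Bright channel of an H x W x C image in [0, 1]: per-pixel
    maximum over channels, then a local max filter over a square
    patch. The BCP says this value should be close to 1 for a
    well-exposed image, so a low bright channel signals low light."""
    h, w, _ = img.shape
    ch_max = img.max(axis=2)                 # per-pixel max over channels
    pad = patch // 2
    padded = np.pad(ch_max, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):                       # naive sliding-window max
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].max()
    return out
```

A pseudo ground-truth for the unsupervised loss can then be formed by scaling the input so that its bright channel approaches 1, which is the kind of constraint the proposed loss encodes.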

Learning Discriminative Fisher Kernel for Image Retrieval

  • Wang, Bin;Li, Xiong;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.3
    • /
    • pp.522-538
    • /
    • 2013
  • Content-based image retrieval has become an increasingly important research topic because of its wide application. It is highly challenging when facing a large-scale database with large variance. Retrieval systems rely on a key component: the predefined or learned similarity measures over images. We note that the similarity measures can potentially be improved if the data distribution information is exploited in a more sophisticated way. In this paper, we propose a similarity measure learning approach for image retrieval. The similarity measure, the so-called Fisher kernel, is derived from the probabilistic distribution of images and is a function of the observed data, hidden variables, and model parameters, where the hidden variables encode high-level information that is powerful for discrimination but was not exploited by previous methods. We further propose a discriminative learning method for the similarity measure, i.e., encouraging the learned similarity to take a large value for a pair of images with the same label and a small value for a pair of images with distinct labels. The learned similarity measure, fully exploiting the data distribution, is well adapted to the dataset and improves the retrieval system. We evaluate the proposed method on the Corel-1000, Corel5k, Caltech101, and MIRFlickr 25,000 databases. The results show the competitive performance of the proposed method.

Design of an Automatic constructed Fuzzy Adaptive Controller(ACFAC) for the Flexible Manipulator (유연 로봇 매니퓰레이터의 자동 구축 퍼지 적응 제어기 설계)

  • 이기성;조현철
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.2
    • /
    • pp.106-116
    • /
    • 1998
  • A position control algorithm for a flexible manipulator is studied. The proposed algorithm is based on an ACFAC (Automatic Constructed Fuzzy Adaptive Controller) system built on neural network learning algorithms. The proposed system learns membership functions for the input variables using an unsupervised competitive learning algorithm, and the output information using a supervised outstar learning algorithm. ACFAC does not need a dynamic model of the flexible manipulator. The ACFAC is designed so that the end point of the flexible manipulator tracks the desired trajectory. The control input to the process is determined by the error, the error velocity, and the variation of the error. Simulation and experimental results show the robustness of ACFAC compared with PID control and neural network algorithms.
