• Title/Summary/Keyword: Fuzzy k-Nearest Neighbors

Design of Fuzzy k-Nearest Neighbors Classifiers based on Feature Extraction by using Stacked Autoencoder (Stacked Autoencoder를 이용한 특징 추출 기반 Fuzzy k-Nearest Neighbors 패턴 분류기 설계)

  • Rho, Suck-Bum; Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.1 / pp.113-120 / 2015
  • In this paper, we propose a feature extraction method using stacked autoencoders composed of restricted Boltzmann machines (RBMs). A stacked autoencoder is a type of deep network, and RBMs are probabilistic graphical models that can be interpreted as stochastic neural networks. Feature extraction is a key issue in pattern classification, and the stacked autoencoder network is used here to extract new features that improve classification performance. After feature extraction, the fuzzy k-nearest neighbors algorithm serves as the classifier for the newly extracted data set. To evaluate the classification ability of the proposed pattern classifier, experiments are conducted on several machine learning data sets.
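
A minimal sketch of the fuzzy k-NN classification step described above follows, assuming the stacked-autoencoder features have already been extracted; the random `features` array merely stands in for that output, and the fuzzifier m = 2 and k = 5 are illustrative choices, not values taken from the paper.

```python
# Fuzzy k-NN classification (Keller-style inverse-distance memberships).
# The autoencoder feature extraction is assumed done; `features` is a placeholder.
import numpy as np

def fuzzy_knn_predict(X_train, y_train, X_query, k=5, m=2.0, eps=1e-8):
    """Return predicted classes and class memberships for each query point."""
    classes = np.unique(y_train)
    memberships = np.zeros((len(X_query), len(classes)))
    for i, x in enumerate(X_query):
        d = np.linalg.norm(X_train - x, axis=1)          # distances to all training points
        nn = np.argsort(d)[:k]                           # indices of the k nearest neighbors
        w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + eps)     # inverse-distance weights (fuzzifier m)
        for c_idx, c in enumerate(classes):
            memberships[i, c_idx] = np.sum(w * (y_train[nn] == c)) / np.sum(w)
    return classes[np.argmax(memberships, axis=1)], memberships

# Illustrative usage with random "extracted" features standing in for autoencoder output.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 10))
labels = (features[:, 0] > 0).astype(int)
pred, _ = fuzzy_knn_predict(features[:80], labels[:80], features[80:], k=5)
print("accuracy:", np.mean(pred == labels[80:]))
```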

Design of Lazy Classifier based on Fuzzy k-Nearest Neighbors and Reconstruction Error (퍼지 k-Nearest Neighbors 와 Reconstruction Error 기반 Lazy Classifier 설계)

  • Roh, Seok-Beom; Ahn, Tae-Chon
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.1 / pp.101-108 / 2010
  • In this paper, we propose a new lazy classifier that combines a fuzzy k-nearest neighbors approach with feature selection based on reconstruction error, the performance index for locally linear reconstruction. When a new query point is given, the fuzzy k-nearest neighbors approach defines the local area in which the local classifier operates and assigns weighting values to the data patterns within that area. After the local area is defined and the weights are assigned, feature selection is carried out to reduce the dimensionality of the feature space. Once features are selected according to the reconstruction error, a local classifier in the form of a polynomial is built using weighted least squares estimation. The experimental study includes a comparative analysis against several commonly used methods such as standard neural networks, support vector machines, linear discriminant analysis, and C4.5 trees.
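
Below is a minimal sketch of the lazy, locally weighted idea: fuzzy k-NN weights define a neighborhood around the query, and a local linear (degree-1 polynomial) model is fit by weighted least squares. The reconstruction-error-based feature selection from the paper is omitted, and all names, data, and parameter values are illustrative rather than the paper's.

```python
# Lazy locally weighted prediction: fuzzy k-NN weights + weighted least squares.
import numpy as np

def lazy_local_predict(X_train, y_train, x_query, k=15, m=2.0, eps=1e-8):
    d = np.linalg.norm(X_train - x_query, axis=1)
    nn = np.argsort(d)[:k]                              # local area around the query
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + eps)        # fuzzy (inverse-distance) weights
    Xl = np.hstack([np.ones((k, 1)), X_train[nn]])      # local linear design matrix
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xl * W, y_train[nn] * W.ravel(), rcond=None)  # weighted LSE
    return np.append(1.0, x_query) @ coef               # local prediction at the query

# Illustrative usage on synthetic data with a 0/1 target.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
score = lazy_local_predict(X[:150], y[:150], X[151])
print("predicted class:", int(score > 0.5))
```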

Design of Radial Basis Function with the Aid of Fuzzy KNN and Conditional FCM (퍼지 kNN과 Conditional FCM을 이용한 퍼지 RBF의 설계)

  • Roh, Seok-Beon; Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.6 / pp.1223-1229 / 2009
  • The performance of radial basis function neural networks depends on how the radial basis functions are set up over the input space, which is a key step in their design. The conventional way to initialize the locations of the radial basis functions over the input space is conditional fuzzy C-means clustering. However, this approach often falls short of the expected modeling performance because conditional fuzzy C-means clustering cannot project information extracted over the output space into the input space. To compensate for this drawback, we apply a fuzzy k-nearest neighbors approach to project the auxiliary information defined over the output space into the input space without loss of information.
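
A minimal sketch of a conditional fuzzy C-means step that places prototypes (usable as RBF centers) follows. In the paper the auxiliary condition values are projected from the output space into the input space via fuzzy k-NN; here they are computed directly from the outputs for brevity, and all names and parameter values are illustrative assumptions.

```python
# Conditional fuzzy C-means (Pedrycz-style): each datum carries a condition
# value f_i derived from the output space, and memberships per datum sum to f_i.
import numpy as np

def conditional_fcm(X, f, n_clusters=3, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), n_clusters, replace=False)]       # initial prototypes
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-10
        inv = d ** (-2.0 / (m - 1.0))
        U = f[:, None] * inv / inv.sum(axis=1, keepdims=True)  # memberships sum to f_i per datum
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]               # prototype (RBF center) update
    return V, U

# Illustrative usage: condition values from a simple output-space membership.
rng = np.random.default_rng(2)
X = rng.normal(size=(150, 2))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=150)
f = np.exp(-((y - y.mean()) / y.std()) ** 2)                   # output-space condition values
V, U = conditional_fcm(X, f, n_clusters=4)
print("prototypes:\n", V)
```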

Assembly performance evaluation method for prefabricated steel structures using deep learning and k-nearest neighbors

  • Hyuntae Bang; Byeongjun Yu; Haemin Jeon
    • Smart Structures and Systems / v.32 no.2 / pp.111-121 / 2023
  • This study proposes an automated assembly performance evaluation method for prefabricated steel structures (PSSs) using machine learning methods. Assembly component images were segmented using a modified version of the receptive field pyramid. By factorizing the channel modulation and receptive field exploration layers of the convolution pyramid, highly accurate segmentation results were obtained. After segmentation, the positions of the bolt holes were calculated using various image processing techniques, such as fuzzy-based edge detection, Hough line detection, and image perspective transformation. By calculating the distance ratios between bolt holes, the assembly performance of the PSS was estimated using the k-nearest neighbors (kNN) algorithm. The effectiveness of the proposed framework was validated using a 3D-printed PSS model and a field test. The results indicated that this approach could recognize assembly components with an intersection over union (IoU) of 95% and evaluate assembly performance with an error of less than 5%.
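
A minimal sketch of the final estimation step follows: given bolt-hole centers produced by the vision pipeline, pairwise distance ratios serve as scale-invariant features and a plain k-NN model estimates an assembly score. The hole layout, noise levels, and quality scores below are synthetic placeholders, not the paper's data or scoring scheme.

```python
# Distance-ratio features from bolt-hole centers + k-NN score estimation.
import numpy as np

def distance_ratios(hole_xy):
    """Sorted pairwise distances between hole centers, normalized by the largest one."""
    n = len(hole_xy)
    d = [np.linalg.norm(hole_xy[i] - hole_xy[j]) for i in range(n) for j in range(i + 1, n)]
    d = np.sort(np.asarray(d))
    return d / d.max()                                   # scale-invariant ratio features

def knn_estimate(train_feats, train_scores, query_feat, k=3):
    d = np.linalg.norm(train_feats - query_feat, axis=1)
    nn = np.argsort(d)[:k]
    return train_scores[nn].mean()                       # average score of the k nearest samples

# Illustrative usage with a synthetic 4-hole layout and simulated assembly error.
rng = np.random.default_rng(3)
nominal = np.array([[0, 0], [100, 0], [100, 50], [0, 50]], dtype=float)   # ideal hole layout (mm)
train_feats, train_scores = [], []
for _ in range(50):
    noise = rng.normal(scale=2.0, size=nominal.shape)    # simulated positional error
    train_feats.append(distance_ratios(nominal + noise))
    train_scores.append(100.0 - np.abs(noise).mean() * 10)  # synthetic quality score
train_feats, train_scores = np.asarray(train_feats), np.asarray(train_scores)
query = distance_ratios(nominal + rng.normal(scale=1.0, size=nominal.shape))
print("estimated assembly score:", round(knn_estimate(train_feats, train_scores, query, k=5), 1))
```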

Optimized KNN/IFCM Algorithm for Efficient Indoor Location (효율적인 실내 측위를 위한 최적화된 KNN/IFCM 알고리즘)

  • Lee, Jang-Jae; Song, Lick-Ho; Kim, Jong-Hwa; Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.125-133 / 2011
  • In any pattern-matching-based algorithm in a WLAN environment, the signal-to-noise ratio (SNR) characteristics observed from multiple access points (APs) are used to build a database in the training phase; in the estimation phase, the two-dimensional coordinates of the mobile unit (MU) are estimated by comparing the newly recorded SNR with the fingerprints stored in the database. As a fingerprinting method, k-nearest neighbors (KNN) has been widely applied to indoor location in wireless local area networks (WLANs), but its performance is sensitive to the number of neighbors k and the positions of the reference points (RPs). The intuitive fuzzy c-means (IFCM) clustering algorithm is therefore applied to improve KNN, yielding the KNN/IFCM hybrid algorithm presented in this paper. In the proposed algorithm, KNN first selects k RPs as the data samples for IFCM based on SNR; the k RPs are then classified into different clusters by IFCM, again based on SNR. Experimental results indicate that the proposed KNN/IFCM hybrid algorithm generally outperforms the KNN, KNN/FCM, and KNN/PFCM algorithms when the location error is less than 2 m.
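
A minimal sketch of the fingerprinting idea follows: k-NN selects candidate reference points in signal space, a fuzzy C-means step (used here as a simplified stand-in for the paper's IFCM) groups them, and the position estimate is a membership-weighted mean of the best-matching cluster. The radio map, path-loss model, and all parameter values below are synthetic assumptions, not the paper's setup.

```python
# KNN candidate selection + fuzzy clustering of reference points for indoor positioning.
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=30, seed=0):
    """Plain fuzzy C-means, used as a simplified stand-in for IFCM."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-10
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        V = ((U ** m).T @ X) / (U ** m).sum(axis=0)[:, None]
    return U, V

def locate(fingerprints, rp_coords, rss_query, k=8, c=2):
    d = np.linalg.norm(fingerprints - rss_query, axis=1)   # signal-space distances
    nn = np.argsort(d)[:k]                                  # k-NN: candidate RPs
    U, _ = fcm(fingerprints[nn], c=c)                       # cluster candidates in signal space
    best = np.argmin([np.mean(d[nn] * U[:, j]) for j in range(c)])  # cluster that matches the query best
    w = U[:, best] / U[:, best].sum()
    return w @ rp_coords[nn]                                # membership-weighted position estimate

# Illustrative usage: synthetic radio map built from a simple log-distance path-loss model.
rng = np.random.default_rng(4)
rp_coords = rng.uniform(0, 20, size=(60, 2))                # RP positions on a 20 m x 20 m floor
ap_coords = rng.uniform(0, 20, size=(4, 2))                 # 4 access points
rss = lambda p: -30 - 20 * np.log10(np.linalg.norm(ap_coords - p, axis=1) + 1.0)
fingerprints = np.array([rss(p) for p in rp_coords])        # offline radio map
true_pos = np.array([7.0, 12.0])
est = locate(fingerprints, rp_coords, rss(true_pos) + rng.normal(scale=1.0, size=4))
print("estimated position:", est.round(2), "error (m):", round(np.linalg.norm(est - true_pos), 2))
```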