• Title/Summary/Keyword: 행렬 학습 (matrix learning)

RPCA-GMM for Speaker Identification (화자식별을 위한 강인한 주성분 분석 가우시안 혼합 모델)

  • 이윤정;서창우;강상기;이기용
    • The Journal of the Acoustical Society of Korea / v.22 no.7 / pp.519-527 / 2003
  • Speech is strongly affected by outliers introduced by unexpected events such as additive background noise, changes in the speaker's utterance pattern, and voice detection errors. Such outliers may severely degrade speaker recognition performance. In this paper, we propose a GMM based on robust principal component analysis (RPCA-GMM) using M-estimation to solve the problems of both outliers and the high dimensionality of training feature vectors in speaker identification. First, a new feature vector with reduced dimension is obtained by robust PCA based on M-estimation. The robust PCA projects the original feature vector onto the lower-dimensional linear subspace spanned by the leading eigenvectors of the covariance matrix of the feature vectors. Second, a GMM with diagonal covariance matrices is trained on these transformed feature vectors. We performed speaker identification experiments to show the effectiveness of the proposed method, comparing the proposed RPCA-GMM with the PCA and the conventional GMM with diagonal covariance matrices. Whenever the portion of outliers increases by 2%, the proposed method maintains almost the same speaker identification rate, degrading by only 0.03%, while the conventional GMM and the PCA degrade by 0.65% and 0.55%, respectively. This shows that our method is more robust to the presence of outliers.
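
The pipeline above reduces to two steps: a robust eigendecomposition of a reweighted covariance matrix, then a diagonal-covariance GMM on the projected features. The sketch below is a minimal illustration of that idea, not the authors' implementation; the Huber-style weight function, its tuning constant, and the placeholder feature matrix are all assumptions.

```python
# Minimal RPCA-GMM sketch: M-estimation approximated by Huber-style reweighting.
import numpy as np
from sklearn.mixture import GaussianMixture

def robust_pca(X, n_components, n_iter=10, c=1.345):
    """Reduce dimension using a covariance matrix estimated by iterative reweighting."""
    mu = X.mean(axis=0)
    w = np.ones(len(X))
    for _ in range(n_iter):
        Xc = X - mu
        cov = (w[:, None] * Xc).T @ Xc / w.sum()
        # Mahalanobis-style residuals drive the Huber weights
        d = np.sqrt(np.einsum('ij,jk,ik->i', Xc, np.linalg.pinv(cov), Xc))
        w = np.where(d <= c, 1.0, c / np.maximum(d, 1e-12))  # downweight outliers
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs[:, ::-1][:, :n_components]        # leading eigenvectors
    return (X - mu) @ W

# Project training features, then fit a diagonal-covariance GMM per speaker
X = np.random.randn(500, 39)                      # placeholder MFCC-like features
Z = robust_pca(X, n_components=16)
gmm = GaussianMixture(n_components=8, covariance_type='diag').fit(Z)
```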

Forensic Image Classification using Data Mining Decision Tree (데이터 마이닝 결정나무를 이용한 포렌식 영상의 분류)

  • RHEE, Kang Hyeon
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.7 / pp.49-55 / 2016
  • Digital forensic images suffer from a serious problem: they are distributed in various image types. To address this problem, this paper proposes a classification algorithm for forensic image types. The proposed algorithm extracts a 21-dimensional feature vector consisting of the contrast and energy from the GLCM (Gray Level Co-occurrence Matrix) and the entropy of each image type. The classification test of the forensic images is performed over an exhaustive combination of the image types, and TP (True Positive) and FN (False Negative) rates are measured. The class evaluation of the proposed algorithm is rated 'Excellent (A)', as the AUROC (Area Under the Receiver Operating Characteristic curve) computed from the sensitivity and 1-specificity is 0.9980. The minimum average decision error is 0.1349, and reaches 0.0179 when the whole set of forensic image types is involved, confirming the high effectiveness of the proposed classification.
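
A hedged sketch of the feature-extraction and classification stages described above: GLCM contrast and energy plus an image entropy feed a decision tree. The distance/angle grid, the resulting 17-dimensional layout (the paper's exact 21-dimensional vector is not reproduced), and the random toy images are assumptions.

```python
# GLCM texture features + decision tree, in the spirit of the paper (not its code).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import shannon_entropy
from sklearn.tree import DecisionTreeClassifier

def glcm_features(img, distances=(1, 2), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, 'contrast').ravel()   # one value per (distance, angle)
    energy = graycoprops(glcm, 'energy').ravel()
    return np.concatenate([contrast, energy, [shannon_entropy(img)]])

# Toy data: replace with real forensic images and their type labels
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 4, size=40)
X = np.array([glcm_features(im) for im in images])
clf = DecisionTreeClassifier(max_depth=5).fit(X, labels)
```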

An Efficient Multidimensional Scaling Method based on CUDA and Divide-and-Conquer (CUDA 및 분할-정복 기반의 효율적인 다차원 척도법)

  • Park, Sung-In;Hwang, Kyu-Baek
    • Journal of KIISE: Computing Practices and Letters / v.16 no.4 / pp.427-431 / 2010
  • Multidimensional scaling (MDS) is a widely used method for dimensionality reduction, whose purpose is to represent high-dimensional data in a low-dimensional space while preserving the distances among objects as much as possible. MDS has mainly been applied to data visualization and feature selection. Among the various MDS methods, classical MDS is not readily applicable to data with a large number of objects on ordinary desktop computers due to its computational complexity: it must solve eigenpair problems on dissimilarity matrices based on Euclidean distance. Thus, the running time and memory requirements of classical MDS grow rapidly as n (the number of objects) increases, restricting its use in large-scale domains. In this paper, we propose an efficient approximation algorithm for classical MDS based on divide-and-conquer and CUDA. Through a set of experiments, we show that our approach is highly efficient and effective for the analysis and visualization of data consisting of several thousands of objects.
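
For reference, the eigenpair computation at the heart of classical MDS, which this paper accelerates, looks roughly like the NumPy sketch below; the CUDA kernels and the divide-and-conquer partitioning are not reproduced here.

```python
# Classical MDS: double-center the squared distances, take leading eigenpairs.
import numpy as np

def classical_mds(D, k=2):
    """Embed objects in k dimensions given a pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:k]        # leading eigenpairs
    L = np.sqrt(np.maximum(eigvals[idx], 0))
    return eigvecs[:, idx] * L

X = np.random.randn(200, 10)                   # toy high-dimensional data
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=2)                      # 2-D embedding for visualization
```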

Clustering Gene Expression Data by MCL Algorithm (MCL 알고리즘을 사용한 유전자 발현 데이터 클러스터링)

  • Shon, Ho-Sun;Ryu, Keun-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.4 / pp.27-33 / 2008
  • Clustering of gene expression data is used to analyze the results of microarray studies and is one of the methods frequently used to understand degrees of biological change and gene expression. In biological research, the MCL algorithm is a quick and efficient algorithm for clustering the nodes of a graph. We modified the existing MCL algorithm and applied it to microarray data. In applying the MCL algorithm, we ran simulations adjusting two factors, namely the inflation and the diagonal term, and applied them to the Markov matrix. Furthermore, in order to distinguish classes more clearly in the modified MCL algorithm, we took the average of each row and used it as a threshold. The improved algorithm therefore achieves higher accuracy than the existing one: in the actual experiment, it showed an average accuracy of 70% when compared with the known classes. We also compared the MCL algorithm with self-organizing map (SOM) clustering, K-means clustering, and hierarchical clustering (HC), and it produced better results than hierarchical clustering and the K-means method.
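
The core MCL loop the authors modify alternates expansion and inflation on a column-stochastic Markov matrix. The sketch below is a generic illustration under assumed parameter values; the row-mean thresholding mirrors the modification described above, but the toy graph and cluster-extraction details are simplifications.

```python
# Markov clustering (MCL) sketch with the row-mean threshold described in the paper.
import numpy as np

def mcl(A, inflation=2.0, n_iter=50, self_loop=1.0):
    """Cluster a graph from its adjacency matrix via expansion/inflation."""
    M = A + self_loop * np.eye(len(A))          # diagonal term: add self-loops
    M /= M.sum(axis=0, keepdims=True)           # column-stochastic Markov matrix
    for _ in range(n_iter):
        M = M @ M                               # expansion: spread flow
        M = M ** inflation                      # inflation: strengthen strong flows
        M /= M.sum(axis=0, keepdims=True)
    # Row-mean thresholding to read off classes more clearly
    mask = M > M.mean(axis=1, keepdims=True)
    clusters = {tuple(np.nonzero(row)[0]) for row in mask if row.any()}
    return [list(c) for c in clusters]

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(mcl(A))                                   # expected: the two connected components
```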

Audio signal clustering and separation using a stacked autoencoder (복층 자기부호화기를 이용한 음향 신호 군집화 및 분리)

  • Jang, Gil-Jin
    • The Journal of the Acoustical Society of Korea / v.35 no.4 / pp.303-309 / 2016
  • This paper proposes a novel approach to the problem of audio signal clustering using a stacked autoencoder. The proposed stacked autoencoder learns an efficient representation of the input signal that enables the clustering of constituent signals with similar characteristics, so the original sources can be separated based on the clustering results. An STFT (Short-Time Fourier Transform) is performed to extract the time-frequency spectrum, and rectangular windows at all possible locations are used as input values to the autoencoder. The outputs of the middle, encoding layer are used to cluster the rectangular windows, and the original sources are separated by Wiener filters derived from the clustering results. Source separation experiments were carried out in comparison with conventional NMF (Non-negative Matrix Factorization), and the sources estimated by the proposed method represent the characteristics of the original sources well, as shown in the time-frequency representation.
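
A compact sketch of the clustering stage: spectrogram windows are auto-encoded, and k-means groups the encoding-layer outputs. Layer sizes, the training loop, the toy random patches, and the use of k-means (the paper does not name its clustering rule here) are assumptions; the STFT and Wiener-filter stages are only indicated in comments.

```python
# Stacked autoencoder on spectrogram windows, clustering the encoding layer.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Flattened magnitude-spectrogram windows (e.g. 20 freq bins x 5 frames); toy data here
patches = torch.rand(1000, 20 * 5)

model = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),
    nn.Linear(64, 16), nn.ReLU(),              # middle, encoding layer
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 100),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                           # train by reconstruction
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(patches), patches)
    loss.backward()
    opt.step()

encoder = model[:4]                            # keep layers up to the encoding layer
with torch.no_grad():
    codes = encoder(patches).numpy()
assignments = KMeans(n_clusters=2, n_init=10).fit_predict(codes)
# Per-cluster spectral masks would then drive Wiener-filter source separation.
```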

Classifier Integration Model for Image Classification (영상 분류를 위한 분류기 통합모델)

  • Park, Dong-Chul
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.2 / pp.96-102 / 2012
  • An advanced form of the Partitioned Feature-based Classifier with Expertise Table (PFC-ET) is proposed in this paper. As with the PFC-ET, the proposed classifier model, called the Classifier Integration Model (CIM), does not classify each datum using the entire feature vector extracted from the original data in concatenated form, but rather uses groups of related features separately. The proposed CIM utilizes the proportion of selected cluster members, instead of the expertise table of the PFC-ET, to minimize the error in the confusion table. The CIM is applied to classification problems on two data sets: the Caltech data set and a collected terrain data set. Compared with the PFC and PFC-ET models, the proposed CIM shows improvements in terms of classification accuracy and post-processing effort.
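
The partitioned-feature idea can be illustrated as below: one classifier per feature group, with predictions combined by a per-group weight. The weighting rule used here (validation accuracy) is only a stand-in for CIM's cluster-member proportions, and the toy data, feature groups, and logistic-regression base learners are all assumptions.

```python
# Partitioned-feature ensemble: per-group classifiers, weighted combination.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = (X[:, 0] + X[:, 5] > 0).astype(int)            # toy labels
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]  # feature groups, not concatenated

X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
models, weights = [], []
for g in groups:
    m = LogisticRegression().fit(X_tr[:, g], y_tr)
    models.append(m)
    weights.append(m.score(X_va[:, g], y_va))      # per-group reliability weight

def predict(X_new):
    votes = np.array([w * m.predict_proba(X_new[:, g])
                      for m, w, g in zip(models, weights, groups)])
    return votes.sum(axis=0).argmax(axis=1)        # weighted soft vote

print(predict(X_va)[:10])
```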

Predicting Early Retirees Using Personality Data (인성 데이터를 활용한 조기 퇴사자 예측)

  • Kim, Young Park;Kim, Hyoung Joong
    • Journal of Digital Contents Society / v.19 no.1 / pp.141-147 / 2018
  • This study analyzed early retirees, employees who stayed with the company no longer than three years, based on a company's personality evaluation data. The prediction model was analyzed separately for two categories: the manufacturing group and the R&D group. Independent variables were selected with the stepwise method. A logistic regression model was chosen as the prediction model from among various supervised learning methods and trained with cross-validation to prevent over-fitting or under-fitting. The accuracy for the two groups was confirmed with confusion matrices. The most influential factor for early retirement in the manufacturing group was revealed to be "immersion," and in the R&D group "antisocial." Whereas past work concentrated on collecting questionnaire data and identifying factors highly related to retirement, this study suggests a sustainable early retirement prediction model by analyzing the tangible outcomes of the recruitment process.
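
A hedged sketch of the described workflow: greedy forward stepwise variable selection scored by cross-validation, a logistic regression fit, and a confusion matrix. The feature matrix and labels are placeholders, not the study's personality items.

```python
# Stepwise selection + cross-validated logistic regression + confusion matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))                  # e.g. personality scale scores
y = (X[:, 2] - X[:, 6] + rng.normal(size=400) > 0).astype(int)  # 1 = early leaver

selected, remaining = [], list(range(X.shape[1]))
while remaining:                               # greedy forward stepwise selection
    scores = {j: cross_val_score(LogisticRegression(),
                                 X[:, selected + [j]], y, cv=5).mean()
              for j in remaining}
    best = max(scores, key=scores.get)
    base = (cross_val_score(LogisticRegression(), X[:, selected], y, cv=5).mean()
            if selected else 0.5)
    if scores[best] <= base:                   # stop when no variable improves CV score
        break
    selected.append(best)
    remaining.remove(best)

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(confusion_matrix(y_te, model.predict(X_te)))
```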

Detection of Cropland in Reservoir Area by Using Supervised Classification of UAV Imagery Based on GLCM (GLCM 기반 UAV 영상의 감독분류를 이용한 저수구역 내 농경지 탐지)

  • Kim, Gyu Mun;Choi, Jae Wan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.6 / pp.433-442 / 2018
  • The reservoir area is defined as the area surrounded by the planned flood level of a dam, or the land under that planned flood level. In this study, supervised classification based on RF (Random Forest), a representative machine learning technique, was performed to detect cropland in the reservoir area. To classify the cropland efficiently, the GLCM (Gray Level Co-occurrence Matrix), a representative technique for quantifying texture information, along with the NDWI (Normalized Difference Water Index) and NDVI (Normalized Difference Vegetation Index), were utilized as additional features during the classification process. In particular, we analyzed the effect of texture information according to the window size used to generate the GLCM and suggested a methodology for detecting cropland in the reservoir area. The experimental results showed that cropland in the reservoir area could be detected efficiently from the UAV multispectral, NDVI, NDWI, and GLCM images. In particular, the GLCM window size was an important parameter for increasing classification accuracy.
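
A simplified per-pixel sketch of the feature stack described above: band values, NDVI, NDWI, and a GLCM texture layer feeding a random forest. The band choices, the tiny toy image, and the sliding-window size are illustrative assumptions.

```python
# Per-pixel classification with spectral indices and GLCM texture features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_contrast(gray, win=7):
    """Per-pixel GLCM contrast over a sliding window (window size matters)."""
    pad = win // 2
    g = np.pad(gray, pad, mode='reflect')
    out = np.zeros_like(gray, dtype=float)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            patch = g[i:i + win, j:j + win]
            glcm = graycomatrix(patch, [1], [0], levels=64, symmetric=True, normed=True)
            out[i, j] = graycoprops(glcm, 'contrast')[0, 0]
    return out

rng = np.random.default_rng(0)                 # toy bands; use real UAV imagery
red = rng.integers(0, 64, (32, 32)).astype(np.uint8)
nir = rng.integers(0, 64, (32, 32)).astype(np.uint8)
green = rng.integers(0, 64, (32, 32)).astype(np.uint8)
ndvi = (nir.astype(float) - red) / (nir + red + 1e-6)
ndwi = (green.astype(float) - nir) / (green + nir + 1e-6)
texture = glcm_contrast(red, win=7)

X = np.stack([red, nir, green, ndvi, ndwi, texture], axis=-1).reshape(-1, 6)
y = rng.integers(0, 2, X.shape[0])             # placeholder cropland labels
rf = RandomForestClassifier(n_estimators=100).fit(X, y)
```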

Unsupervised feature selection using orthogonal decomposition and low-rank approximation

  • Lim, Hyunki
    • Journal of the Korea Society of Computer and Information / v.27 no.5 / pp.77-84 / 2022
  • In this paper, we propose a novel unsupervised feature selection method. Conventional unsupervised feature selection methods define a virtual label and use a regression analysis that projects the given data onto this label. However, since virtual labels are generated from the data, they can be formed similarly in the feature space, so conventional methods can select features only within a restricted space. To solve this problem, features are selected using orthogonal projections and low-rank approximations: a virtual label is projected onto an orthogonal space, and the given data set is also projected onto this space. Through this process, effective features can be selected. In addition, the projection matrix is restricted to low rank, allowing more effective features to be selected in a low-dimensional space. To achieve these objectives, a cost function is designed and an efficient optimization method is proposed. Experimental results on six data sets demonstrate that the proposed method outperforms existing unsupervised feature selection methods in most cases.
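
A loose sketch of the general regression-based idea: cluster indicators act as a virtual label, a low-rank projection maps features onto it, and features are ranked by the row norms of the projection. The closed-form ridge step and the SVD rank truncation are simplifications, not the paper's cost function or optimizer.

```python
# Virtual-label regression with a low-rank projection for feature scoring.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))

# Virtual label: one-hot cluster indicator matrix
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)
Y = np.eye(4)[labels]

# Ridge regression X @ W ~= Y, then truncate W to rank r via SVD
W = np.linalg.solve(X.T @ X + 1e-2 * np.eye(30), X.T @ Y)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 2
W_lr = U[:, :r] * s[:r] @ Vt[:r]               # low-rank approximation of W

scores = np.linalg.norm(W_lr, axis=1)          # per-feature importance
top_features = np.argsort(scores)[::-1][:10]
print(top_features)
```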

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Such research can be classified into work using structured data and work using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually took technical and fundamental analysis approaches. In the big data era, the amount of information has rapidly increased, and artificial intelligence methodologies that can find meaning by quantifying textual information, an unstructured data type that carries a large amount of information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock price from the news of the target company itself. According to previous research, however, not only the news of a target company affects its stock price; news of related companies can also affect it. Finding highly relevant companies is not easy because of market-wide effects and random signals, so existing studies have identified them primarily through pre-determined international industry classification standards. Yet recent research shows that the Global Industry Classification Standard exhibits varying homogeneity within its sectors, so forecasting stock prices by taking whole sectors together, without restricting attention to truly relevant companies, can hurt predictive performance. To overcome this limitation, we are the first to use random matrix theory together with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable because statistical efficiency is reduced; a simple correlation analysis in the financial market therefore does not capture the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we use the multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously, with each kernel assigned to predict stock prices from the financial news features of the target firm or one of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) Searching for relevant companies in the wrong way can lower prediction performance. (3) The proposed approach based on random matrix theory performs better than previous studies when cluster analysis is based on the true correlation obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies; this suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, extending existing research that integrated artificial intelligence with complex-system theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue, suggesting that it matters not only to study AI algorithms but also how to adjust the input values theoretically. Third, we confirmed that firms classified together under the Global Industry Classification Standard (GICS) may have low relevance and suggested that it is necessary to define relevance theoretically rather than simply reading it off the GICS.
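
The random-matrix filtering step can be sketched as follows: eigenvalues of the return-correlation matrix lying inside the Marchenko-Pastur band are treated as noise, the largest eigenvalue as the market-wide mode, and what remains as the "true" correlation that drives the clustering. The toy returns, the hierarchical-clustering choice, and the cluster count are assumptions; the news-based multiple-kernel stage is omitted.

```python
# Random-matrix-theory filtering of a correlation matrix, then clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
T, N = 500, 40                                 # trading days x firms (toy returns)
R = rng.normal(size=(T, N))
C = np.corrcoef(R, rowvar=False)

q = T / N
lam_max = (1 + 1 / np.sqrt(q)) ** 2            # Marchenko-Pastur upper edge
vals, vecs = np.linalg.eigh(C)
keep = vals > lam_max                          # eigenmodes above the noise band
keep[np.argmax(vals)] = False                  # drop the market-wide mode
C_true = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T

D = np.sqrt(np.clip(2 * (1 - C_true), 0, None))   # correlation -> distance
Z = linkage(D[np.triu_indices(N, 1)], method='average')
groups = fcluster(Z, t=5, criterion='maxclust')   # related-firm clusters
print(groups)
```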