• Title/Summary/Keyword: Feature dimension reduction


Classification of pathological and normal voice based on dimension reduction of feature vectors (피처벡터 축소방법에 기반한 장애음성 분류)

  • Lee, Ji-Yeoun;Jeong, Sang-Bae;Choi, Hong-Shik;Hahn, Min-Soo
    • Proceedings of the KSPS conference / 2007.05a / pp.123-126 / 2007
  • This paper suggests a method to improve the performance of pathological/normal voice classification. The effectiveness of the mel-frequency filter bank energies is analyzed using the Fisher discriminant ratio (FDR), and both mel-frequency cepstral coefficients (MFCCs) and feature vectors obtained by a linear discriminant analysis (LDA) transformation of the filter bank energies (FBE) are implemented. The paper shows that the FBE-LDA-based GMM is a more effective method for pathological/normal voice classification than the MFCC-based GMM.
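The FDR screening of filter-bank energies described above can be sketched as follows; this is a minimal illustration on synthetic two-class data, not the paper's voice features:

```python
import numpy as np

def fisher_discriminant_ratio(X, y):
    """Per-feature FDR for two classes: (m1 - m2)^2 / (v1 + v2)."""
    X1, X2 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0)
    return num / den

rng = np.random.default_rng(0)
# toy "filter bank energies": feature 0 separates the classes, feature 1 does not
normal = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
patho = rng.normal([3.0, 0.0], 1.0, size=(100, 2))
X = np.vstack([normal, patho])
y = np.repeat([0, 1], 100)
fdr = fisher_discriminant_ratio(X, y)
```

A high FDR marks a band whose energies discriminate well between the two voice classes.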


A Visual Hypernetwork Model Using Eye-Gaze-Information-Based Active Sampling (안구운동추적 정보기반 능동적 샘플링을 반영한 시각 하이퍼네트워크 모델)

  • Kim, Eun-Sol;Kim, Ji-Seop;Amaro, Karinne Ramirez;Beetz, Michael;Jang, Byeong-Tak
    • Proceedings of the Korean Information Science Society Conference / 2012.06b / pp.324-326 / 2012
  • Reducing the dimension of input data is one of the most important problems in machine learning, because the number of operations and the computational complexity grow rapidly as the dimension of the input variables increases. To address this, many machine learning algorithms either explicitly reduce the dimension (feature selection) or apply a transformation to the data to produce a new, lower-dimensional input (feature extraction). In contrast, one of the main reasons humans can process many kinds of high-dimensional sensory data quickly and accurately is that they decide in real time which information to attend to. This study incorporates this aspect of human information processing into a machine learning algorithm and presents a method that uses attention to process data efficiently. By building this property into a visual hypernetwork model, we propose a way to handle high-dimensional input data efficiently. In the experiments, the visual hypernetwork was used to classify actions in high-dimensional image data.
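The gaze-driven active sampling idea can be illustrated as importance-weighted sampling; the `saliency` scores below are random stand-ins for gaze-derived attention, not real eye-tracking data:

```python
import numpy as np

rng = np.random.default_rng(8)
patches = rng.normal(size=(1000, 32))  # hypothetical high-dimensional image patches
saliency = rng.random(1000)            # stand-in for gaze-derived attention scores

# sample a small subset, favoring high-attention patches (active sampling)
p = saliency / saliency.sum()
idx = rng.choice(len(patches), size=100, replace=False, p=p)
subset = patches[idx]
```

Downstream learning then operates on `subset` instead of all 1000 patches, cutting the effective input size by 10x.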

A Fast Method for Face Detection based on PCA and SVM

  • Xia, Chun-Lei;Shin, Hyeon-Gab;Ha, Seok-Wun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.153-156 / 2007
  • In this paper, we propose a fast face detection approach using PCA and SVM. In our detection system, we first filter potential face areas using a statistical feature generated by analyzing the local histogram distribution. We then use an SVM classifier to detect whether faces are present in the test image. The support vector machine (SVM) performs well in classification tasks, and PCA is used for dimension reduction of the sample data. After the PCA transform, the feature vectors used to train the SVM classifier are generated. Our tests in this paper are based on the CMU face database.
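A minimal sketch of the PCA-then-SVM stage, using synthetic stand-ins for face/non-face patches (the system above additionally applies the histogram-based pre-filter before this step):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# toy 64-dim "patch" vectors for two classes (stand-in for real image windows)
X = np.vstack([rng.normal(0, 1, (50, 64)), rng.normal(1, 1, (50, 64))])
y = np.repeat([0, 1], 50)

# PCA reduces the 64-dim patches to 10 components before SVM training
clf = make_pipeline(PCA(n_components=10), SVC())
clf.fit(X, y)
acc = clf.score(X, y)
```

The pipeline object applies the same PCA projection at prediction time, so test windows are reduced consistently.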


Feature Extraction of Basal Cell Carcinoma with Decision Tree (결정 트리를 이용한 기저 세포암 특징 추출)

  • Park, Aa-Ron;Baek, Seong-Joon;Won, Yong-Gwan;Kim, Dong-Kook
    • Proceedings of the IEEK Conference / 2006.06a / pp.239-240 / 2006
  • In this study, we examined all peaks of confocal Raman spectra, as peaks are the most important features for discriminating between basal cell carcinoma (BCC) and normal tissue (NOR). Fourteen peaks were extracted from these candidates using a decision tree. For dimension reduction, the four most frequently selected peaks were chosen; they are located at 1014, 1095, 1439, and $1523cm^{-1}$. These peaks were used as input features of a multilayer perceptron network (MLP). According to the experimental results, the MLP gave a classification error rate of about 6.5%.
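The decision-tree peak selection can be sketched with scikit-learn's impurity-based feature importances; the toy "spectra" below are synthetic, with informative peaks planted at indices 3 and 7:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# toy spectra: 20 "peak intensities"; only peaks 3 and 7 differ between classes
X = rng.normal(0, 1, (200, 20))
y = np.repeat([0, 1], 100)
X[y == 1, 3] += 2.0
X[y == 1, 7] += 2.0

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
# keep the four most important peaks as the reduced feature set
top = np.argsort(tree.feature_importances_)[::-1][:4]
```

The selected indices (here, peak positions) would then feed a smaller classifier such as the MLP in the paper.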


Effective Combination of Temporal Information and Linear Transformation of Feature Vector in Speaker Verification (화자확인에서 특징벡터의 순시 정보와 선형 변환의 효과적인 적용)

  • Seo, Chang-Woo;Zhao, Mei-Hua;Lim, Young-Hwan;Jeon, Sung-Chae
    • Phonetics and Speech Sciences / v.1 no.4 / pp.127-132 / 2009
  • The feature vectors used in conventional speaker recognition (SR) systems may be highly correlated with their neighbors. To improve SR performance, many researchers have adopted linear transformation methods such as principal component analysis (PCA). In general, the linear transformation is applied to the concatenation of the static features and their dynamic features. However, a transformation based on both the static and dynamic features is more complex than one based on the static features alone, because of the higher feature order. To overcome this problem, we propose an efficient method that applies a linear transformation and the temporal information of the features to reduce complexity and improve performance in speaker verification (SV). The proposed method first performs a linear transformation with PCA coefficients; the delta parameters carrying temporal information are then obtained from the transformed features. The proposed method requires only 1/4 of the covariance matrix size compared with concatenating the static and dynamic features for the PCA coefficients. Also, the delta parameters are extracted from the linearly transformed features after the dimension of the static features has been reduced. Compared with PCA and conventional methods in terms of equal error rate (EER) in SV, the proposed method shows better performance while requiring less storage space and lower complexity.
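The order of operations, PCA on the static features first and delta parameters from the transformed features afterward, can be sketched with random stand-ins for the static feature frames:

```python
import numpy as np

rng = np.random.default_rng(3)
T, D, R = 100, 12, 6            # frames, static dim, reduced dim
X = rng.normal(size=(T, D))     # stand-in for static MFCC-like feature frames

# PCA on the static features only: the covariance is D x D, not (2D) x (2D),
# which is the source of the 1/4 storage saving mentioned above
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:R].T               # reduced static features

def deltas(F, k=2):
    """First-order regression deltas over a +/- k frame window."""
    P = np.pad(F, ((k, k), (0, 0)), mode="edge")
    num = sum(i * (P[k + i:k + i + len(F)] - P[k - i:k - i + len(F)])
              for i in range(1, k + 1))
    return num / (2 * sum(i * i for i in range(1, k + 1)))

# temporal information is computed *after* the transformation and reduction
feats = np.hstack([Z, deltas(Z)])
```

The final vectors concatenate the reduced static features with their deltas, so the SV back end still sees temporal information.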


Scalable Hybrid Recommender System with Temporal Information (시간 정보를 이용한 확장성 있는 하이브리드 Recommender 시스템)

  • Ullah, Farman;Sarwar, Ghulam;Kim, Jae-Woo;Moon, Kyeong-Deok;Kim, Jin-Tae;Lee, Sung-Chang
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.2 / pp.61-68 / 2012
  • Recommender systems have gained much popularity among researchers and are applied in a number of applications. The exponential growth of users and products poses key challenges: recommender systems mostly suffer from scalability and accuracy, and the accuracy of a recommender system is, to some extent, inversely proportional to its scalability. In this paper we propose a context-aware hybrid recommender system that uses matrix reduction for the hybrid model and a clustering technique for prediction of item features. In our approach, we use user item-feature ratings, user demographic information, and context information (i.e., specific time and day) to improve scalability and accuracy. Our algorithm produces better results because we reduce the dimension of the item-feature matrix using different reduction techniques, use user demographic information, construct a context-aware hybrid user model, cluster similar users offline, find the nearest neighbors, predict the item features, and recommend the top-N items.
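The offline clustering step can be illustrated with k-means on a toy user × item-feature rating matrix; predicting from the cluster mean is a simplified stand-in for the paper's nearest-neighbor prediction:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# toy user x item-feature rating matrix: two user groups with distinct tastes
R = np.vstack([rng.normal(4, 0.3, (30, 8)),   # users who rate high
               rng.normal(2, 0.3, (30, 8))])  # users who rate low

# cluster similar users offline
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(R)

# predict item features for user 0 from the mean of its own cluster
mask = km.labels_ == km.labels_[0]
pred = R[mask].mean(axis=0)
```

Because prediction only consults one cluster rather than all users, the per-query cost stays bounded as the user base grows.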

Dimension Reduction of Solid Models by Mid-Surface Generation

  • Sheen, Dong-Pyoung;Son, Tae-Geun;Ryu, Cheol-Ho;Lee, Sang-Hun;Lee, Kun-Woo
    • International Journal of CAD/CAM / v.7 no.1 / pp.71-80 / 2007
  • Recently, feature-based solid modeling systems have been widely used in product design. However, for engineering analysis of a product model, an abstracted CAD model composed of mid-surfaces is desirable, provided the abstracted model does not seriously affect the analysis result. To meet this requirement, a variety of solid abstraction methods, such as MAT (medial axis transformation), have been proposed to provide an abstracted CAE model from a solid design model. The MAT algorithm can be applied to any complicated solid model; however, additional work to trim and extend parts of the result is required to obtain a practically useful CAE model, because the inscribed sphere used in the MAT method generates insufficient surfaces with branches. On the other hand, the mid-surface abstraction approach offers a practical method for generating a two-dimensional abstracted model, even though it has difficulty creating a mid-surface from some complicated parts. In this paper, we propose a dimension reduction approach for solid models based on the mid-surface abstraction approach. This approach first simplifies the solid model by abbreviating or removing trivial features such as fillets, mountings, or protrusions. The geometry of each face is replaced with mid-patches from the simplified model, and unnecessary topological entities are then deleted to generate a clean abstracted model. Additional work, such as extending and stitching the mid-patches, completes the generation of a mid-surface model from the patches.

A Comparative Experiment on Dimensional Reduction Methods Applicable for Dissimilarity-Based Classifications (비유사도-기반 분류를 위한 차원 축소방법의 비교 실험)

  • Kim, Sang-Woon
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.3 / pp.59-66 / 2016
  • This paper presents an empirical evaluation of dimensionality reduction strategies by which dissimilarity-based classification (DBC) can be implemented efficiently. In DBC, classification is based not on feature measurements of individual objects (a set of attributes) but rather on a suitable dissimilarity measure among the objects (pair-wise object comparisons). One problem of DBC is the high dimensionality of the dissimilarity space when many objects are treated. To address this issue, two kinds of solutions have been proposed in the literature: prototype selection (PS)-based methods and dimension reduction (DR)-based methods. In this paper, instead of utilizing the PS-based or DR-based methods, a way of performing DBC in eigen spaces (ES) is considered and empirically compared. In ES-based DBC, classification is performed as follows: first, a set of principal eigenvectors is extracted from the training data set using principal component analysis; second, an eigen space is spanned using a subset of the extracted and selected eigenvectors; third, after measuring distances among the projected objects in the eigen space using $l_p$-norms as the dissimilarity, classification is performed. The experimental results, obtained using the nearest neighbor rule with artificial and real-life benchmark data sets, demonstrate that when the dimensionality of the eigen space is selected appropriately, the performance of ES-based DBC can be improved in terms of classification accuracy compared with the PS-based and DR-based methods.
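The three ES-based DBC steps can be sketched directly; the data are synthetic and the eigenvector subset size is an arbitrary choice here:

```python
import numpy as np

rng = np.random.default_rng(5)
Xtr = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(3, 1, (40, 10))])
ytr = np.repeat([0, 1], 40)

# 1) principal eigenvectors from the training set (via SVD of centered data)
mu = Xtr.mean(axis=0)
_, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
E = Vt[:3].T                      # 2) eigen space from a subset of eigenvectors
Ztr = (Xtr - mu) @ E              # training objects projected into the eigen space

def classify(x, p=2):
    """3) 1-NN using an l_p-norm as the dissimilarity in the eigen space."""
    d = np.sum(np.abs(Ztr - (x - mu) @ E) ** p, axis=1) ** (1.0 / p)
    return ytr[np.argmin(d)]

pred = classify(rng.normal(3, 1, 10))
```

Varying `p` and the number of retained eigenvectors corresponds to the dimensionality and dissimilarity choices the paper evaluates.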

Improved Feature Descriptor Extraction and Matching Method for Efficient Image Stitching on Mobile Environment (모바일 환경에서 효율적인 영상 정합을 위한 향상된 특징점 기술자 추출 및 정합 기법)

  • Park, Jin-Yang;Ahn, Hyo Chang
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.39-46 / 2013
  • Recently, the mobile industry has grown rapidly and device performance has improved, so the use of mobile devices in daily life is increasing. Mobile devices are also equipped with high-performance cameras, so image stitching can be carried out on the devices themselves instead of on a desktop. However, mobile devices have limited hardware for image stitching, which is computationally expensive. In this paper, we propose an improved feature descriptor extraction and matching method for efficient image stitching in a mobile environment. Our method reduces computational complexity by extending the orientation window and reducing the dimension of the feature descriptor when the descriptor is generated. In addition, the computational complexity of image stitching is reduced through the classification of matching points. In our results, our method improves the computation time of image stitching over the previous method. It is therefore suitable for the mobile environment and can still produce a natural-looking stitched image.
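The matching step can be illustrated with nearest-neighbour descriptor matching under a distance-ratio test, a standard criterion used here as a stand-in for the paper's own classification of matching points; the reduced-dimension descriptors are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
desc_a = rng.normal(size=(20, 16))                    # hypothetical reduced descriptors, image A
desc_b = desc_a + rng.normal(0, 0.05, desc_a.shape)   # same points in image B, slight noise

def match(da, db, ratio=0.8):
    """Keep a pair only if the best match is clearly better than the second best."""
    pairs = []
    for i, d in enumerate(da):
        dist = np.linalg.norm(db - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            pairs.append((i, int(j)))
    return pairs

pairs = match(desc_a, desc_b)
```

With lower-dimensional descriptors, each distance computation is cheaper, which is where the method's savings on mobile hardware come from.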

Supervised Rank Normalization for Support Vector Machines (SVM을 위한 교사 랭크 정규화)

  • Lee, Soojong;Heo, Gyeongyong
    • Journal of the Korea Society of Computer and Information / v.18 no.11 / pp.31-38 / 2013
  • Feature normalization as a pre-processing step has been widely used in classification problems to reduce the effect of the different scales of the feature dimensions and the resulting error. Most existing methods, however, assume some distribution function for the features. Worse, they do not use the labels of the data points and, as a result, do not guarantee that the normalization is optimal for classification. In this paper, a supervised rank normalization is proposed that combines rank normalization with a supervised learning technique. Like rank normalization, the proposed method does not assume any feature distribution; in addition, it uses the class labels of nearest neighbors to reduce classification error. Since an SVM, in particular, tries to draw a decision boundary in the middle of the class-overlap zone, reducing the data density in that area helps the SVM find a decision boundary with lower generalization error. All of the above is verified through experimental results.
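The unsupervised rank-normalization base (no distributional assumption) can be sketched as follows; the label-aware refinement the paper adds on top is omitted:

```python
import numpy as np

def rank_normalize(X):
    """Map each feature value to its rank in (0, 1]; makes no assumption
    about the feature's distribution, unlike z-score or min-max scaling."""
    order = np.argsort(X, axis=0)
    ranks = np.empty_like(order)
    n = len(X)
    for j in range(X.shape[1]):
        ranks[order[:, j], j] = np.arange(1, n + 1)
    return ranks / n

rng = np.random.default_rng(7)
# two features on wildly different scales and shapes
X = np.hstack([rng.exponential(5.0, (100, 1)), rng.normal(0, 0.01, (100, 1))])
Z = rank_normalize(X)
```

After normalization both features occupy the same (0, 1] range regardless of their original scale, so neither dominates the SVM kernel distances.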