• Title/Summary/Keyword: Training Data Set

Search Result 812

Text-independent Speaker Identification by Bagging VQ Classifier

  • Kyung, Youn-Jeong;Park, Bong-Dae;Lee, Hwang-Soo
    • The Journal of the Acoustical Society of Korea, v.20 no.2E, pp.17-24, 2001
  • In this paper, we propose a bootstrap aggregating (bagging) vector quantization (VQ) classifier to improve the performance of a text-independent speaker recognition system. This method generates multiple training data sets by resampling the original training data set, constructs the corresponding VQ classifiers, and then integrates the multiple VQ classifiers into a single classifier by voting. The bagging method has been proven to greatly improve the performance of unstable classifiers, and this paper shows through two different experiments that the VQ classifier is unstable. In the first experiment, the bias and variance of a VQ classifier are computed on a waveform database, and the variance of the VQ classifier is compared with that of the classification and regression tree (CART) classifier [1]; the variance of the VQ classifier is shown to be as large as that of the CART classifier. The second experiment involves speaker recognition, where the recognition rates vary significantly with minor changes in the training data set. Closed-set, text-independent speaker identification experiments are then performed on the TIMIT database to compare the bagging VQ classifier with the conventional VQ classifier. The bagging VQ classifier yields improved performance over the conventional VQ classifier, and it also outperforms the conventional classifier on small training data sets. (A minimal sketch of the bagging-and-voting scheme appears after this entry.)

  • PDF
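
As a rough, non-authoritative illustration of the bagging scheme described above (not the authors' code), the Python sketch below bootstrap-resamples per-speaker feature sets, fits one VQ codebook per speaker with scikit-learn's KMeans, and combines the resulting classifiers by majority vote. The feature dimensionality, codebook size, bag count, and toy data are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_vq_codebooks(features_by_speaker, codebook_size=16, rng=None):
    """Fit one VQ codebook (KMeans centroids) per speaker on a bootstrap resample."""
    rng = rng or np.random.default_rng()
    codebooks = {}
    for spk, feats in features_by_speaker.items():
        idx = rng.integers(0, len(feats), size=len(feats))      # bootstrap resample
        km = KMeans(n_clusters=codebook_size, n_init=5, random_state=0).fit(feats[idx])
        codebooks[spk] = km.cluster_centers_
    return codebooks

def vq_distortion(codebook, feats):
    """Average distance of the feature vectors to their nearest codeword."""
    d = np.linalg.norm(feats[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

def classify(codebooks, feats):
    """Pick the speaker whose codebook yields the lowest distortion."""
    return min(codebooks, key=lambda spk: vq_distortion(codebooks[spk], feats))

def bagging_vq_identify(features_by_speaker, test_feats, n_bags=10, codebook_size=16):
    """Train n_bags VQ classifiers on bootstrap resamples and vote on the test utterance."""
    rng = np.random.default_rng(0)
    votes = [classify(train_vq_codebooks(features_by_speaker, codebook_size, rng), test_feats)
             for _ in range(n_bags)]
    return max(set(votes), key=votes.count)                     # majority vote

# Toy usage with random 12-dimensional "features" for three speakers.
rng = np.random.default_rng(1)
speakers = {s: rng.normal(loc=i, size=(200, 12)) for i, s in enumerate("ABC")}
test = rng.normal(loc=1, size=(80, 12))                         # should vote for speaker "B"
print(bagging_vq_identify(speakers, test))
```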

Incremental Multi-classification by Least Squares Support Vector Machine

  • Oh, Kwang-Sik;Shim, Joo-Yong;Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society, v.14 no.4, pp.965-974, 2003
  • In this paper we propose an incremental classification of multi-class data sets by LS-SVM. By encoding the output variable of the training data set appropriately, we obtain new class-specific output vectors for the training data. Online LS-SVM is then applied to each newly encoded output vector. The proposed method reduces the computational cost and allows the training to be performed incrementally. With an incremental formulation of the inverse matrix, the current information and the new input data are used to build a new inverse matrix for estimating the optimal bias and Lagrange multipliers, so the computational difficulties of large-scale matrix inversion can be avoided. The performance of the proposed method is demonstrated via numerical studies and compared with that of an artificial neural network. (A batch sketch of the output encoding and the LS-SVM system appears after this entry.)

  • PDF
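
The sketch below shows, in batch form, the output-encoding idea the abstract describes: a +1/-1 output vector is built per class (a common one-vs-rest encoding is assumed here) and a separate LS-SVM system is solved for each vector. The paper's incremental inverse-matrix update is not reproduced, and the RBF kernel width, regularization value, and toy data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                                   # bias b, multipliers alpha

def lssvm_decision(X_train, b, alpha, X_test, sigma=1.0):
    return rbf_kernel(X_test, X_train, sigma) @ alpha + b

def multiclass_lssvm(X, labels, X_test, gamma=10.0, sigma=1.0):
    """One-vs-rest encoding: one +1/-1 output vector per class, one LS-SVM per vector."""
    classes = np.unique(labels)
    scores = []
    for c in classes:
        y = np.where(labels == c, 1.0, -1.0)                 # encoded output vector for class c
        b, alpha = lssvm_fit(X, y, gamma, sigma)
        scores.append(lssvm_decision(X, b, alpha, X_test, sigma))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

# Toy usage: three Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in ((0, 0), (2, 0), (0, 2))])
labels = np.repeat([0, 1, 2], 30)
print(multiclass_lssvm(X, labels, X[:5]))
```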

Training for Huge Data set with On Line Pruning Regression by LS-SVM

  • Kim, Dae-Hak;Shim, Joo-Yong;Oh, Kwang-Sik
    • Proceedings of the Korean Statistical Society Conference, 2003.10a, pp.137-141, 2003
  • LS-SVM (least squares support vector machine) is a widely applicable and useful machine learning technique for classification and regression analysis. LS-SVM can be a good substitute for classical statistical methods, but computational difficulties remain in inverting the kernel matrix of a huge data set. In the modern information society, huge data sets are easily obtained in online or batch mode. For such huge data sets, we suggest an online pruning regression method based on LS-SVM. With a relatively small number of support vectors remaining after pruning, we can achieve almost the same performance as regression on the full data set. (A simplified prune-and-refit sketch appears after this entry.)

  • PDF
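
A simplified, non-authoritative sketch of the pruning idea: fit an LS-SVM regressor, repeatedly drop the support vectors with the smallest |alpha|, and refit until a budget is met. The paper's online formulation is replaced here by batch refitting, and the kernel, budget, and toy data are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_regress(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM regression: the same bordered linear system, with real-valued targets."""
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def pruned_lssvm(X, y, budget=50, prune_frac=0.1, gamma=10.0, sigma=1.0):
    """Repeatedly drop the support vectors with the smallest |alpha| and refit,
    until at most `budget` support vectors remain."""
    keep = np.arange(len(X))
    while True:
        b, alpha = lssvm_regress(X[keep], y[keep], gamma, sigma)
        if len(keep) <= budget:
            return X[keep], b, alpha
        n_drop = max(1, int(prune_frac * len(keep)))
        keep = keep[np.argsort(np.abs(alpha))[n_drop:]]      # keep the largest-|alpha| points

def predict(SV, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, SV, sigma) @ alpha + b

# Toy usage: a noisy sine, 400 points pruned down to 50 support vectors.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)
SV, b, alpha = pruned_lssvm(X, y, budget=50)
print(len(SV), np.abs(predict(SV, b, alpha, X) - y).mean())
```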

Comparison of EKF and UKF on Training the Artificial Neural Network

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society, v.15 no.2, pp.499-506, 2004
  • The unscented Kalman filter (UKF) is known to outperform the extended Kalman filter (EKF) in nonlinear state estimation, with the significant advantage that it does not require the computation of Jacobians, whereas the EKF has a competitive advantage over the UKF in running time. We compare both algorithms on training an artificial neural network. A validation data set is used to select the parameters that are expected to fit the test data set better. Experimental results indicating the performance of both algorithms are presented. (A minimal EKF training sketch appears after this entry.)

  • PDF
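
A minimal sketch of EKF-based network training under the usual formulation (the weights are the state, the target is the measurement); the UKF variant, which replaces the Jacobian H with sigma-point propagation, is omitted here. The network size, noise parameters q and r, and toy data are assumptions, not values from the paper.

```python
import numpy as np

def unpack(w, n_in, n_hid):
    """Split the flat weight vector into W1 (n_hid x n_in), b1, w2, b2."""
    i = n_hid * n_in
    return w[:i].reshape(n_hid, n_in), w[i:i + n_hid], w[i + n_hid:i + 2 * n_hid], w[-1]

def forward_and_jacobian(w, x, n_in, n_hid):
    """Network output and its Jacobian with respect to the weights (the EKF 'H')."""
    W1, b1, w2, b2 = unpack(w, n_in, n_hid)
    h = np.tanh(W1 @ x + b1)
    y = w2 @ h + b2
    dh = 1.0 - h ** 2                                        # tanh'
    H = np.concatenate([np.outer(w2 * dh, x).ravel(),        # d y / d W1
                        w2 * dh,                             # d y / d b1
                        h,                                   # d y / d w2
                        [1.0]])                              # d y / d b2
    return y, H

def ekf_train(X, y, n_hid=8, q=1e-5, r=1e-2, epochs=5, seed=0):
    """EKF weight estimation with a random-walk model for the weights."""
    n_in = X.shape[1]
    n_w = n_hid * n_in + 2 * n_hid + 1
    rng = np.random.default_rng(seed)
    w, P = 0.1 * rng.normal(size=n_w), np.eye(n_w)
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            P = P + q * np.eye(n_w)                          # process-noise prediction
            y_hat, H = forward_and_jacobian(w, x_t, n_in, n_hid)
            S = H @ P @ H + r                                # innovation variance (scalar)
            K = (P @ H) / S                                  # Kalman gain
            w = w + K * (y_t - y_hat)
            P = P - np.outer(K, H @ P)
    return w

# Toy usage: fit y = sin(x) on 200 points and report the training error.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
w = ekf_train(X, y, n_hid=8)
pred = np.array([forward_and_jacobian(w, x, 1, 8)[0] for x in X])
print(np.abs(pred - y).mean())
```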

Development of ResNet-based WBC Classification Algorithm Using Super-pixel Image Segmentation

  • Lee, Kyu-Man;Kang, Soon-Ah
    • Journal of the Korea Society of Computer and Information, v.23 no.4, pp.147-153, 2018
  • In this paper, we propose an efficient WBC 14-Diff classification method based on WBC-ResNet-152, a CNN model. The key idea is to use super-pixels for segmenting WBC images and ResNet for classifying the WBCs. A total of 136,164 blood image samples (224x224) were grouped into sets for image segmentation, training, training verification, and final test performance analysis. Because super-pixel segmentation yields a different number of images for each class, a weighted average was applied, and the image segmentation error was as low as 7.23%. After 50 training passes over the training data set with a softmax classifier, an average TPR of 80.3% was achieved on the training set of 8,827 images. On this basis, using the verification data set of 21,437 images, the 14-Diff classification achieved an average TPR of 93.4% for normal WBCs and 83.3% for abnormal WBCs. The results and methodology of this research demonstrate the usefulness of artificial intelligence technology in the blood cell image classification field, and the WBC-ResNet-152-based morphology approach is shown to be a meaningful and worthwhile method. Based on stored medical data, in-depth diagnosis and early detection of curable diseases are expected to improve the quality of treatment.
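
A rough sketch of the pipeline shape described above, assuming recent versions of scikit-image and torchvision: SLIC super-pixels provide candidate regions, each region is cropped and resized to 224x224, and a ResNet-152 with a 14-way head is trained with cross-entropy (the softmax classifier). The paper's WBC-ResNet-152 weights, class definitions, and data handling are not reproduced.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic
from skimage.measure import regionprops
from skimage.transform import resize
from torchvision import models

def superpixel_patches(image, n_segments=200, size=224):
    """Segment an image with SLIC super-pixels and crop one resized patch per segment."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    patches = []
    for region in regionprops(labels):
        r0, c0, r1, c1 = region.bbox
        patch = resize(image[r0:r1, c0:c1], (size, size))    # (224, 224, 3)
        patches.append(patch.transpose(2, 0, 1))             # channels-first for torch
    return torch.tensor(np.stack(patches), dtype=torch.float32)

def build_wbc_classifier(n_classes=14):
    """ResNet-152 backbone with a 14-way head; cross-entropy provides the softmax classifier."""
    model = models.resnet152(weights=None)   # pretrained ImageNet weights would be used in practice
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return model

# Toy usage: one random "blood image" segmented and pushed through one training step.
image = np.random.rand(480, 640, 3)
batch = superpixel_patches(image, n_segments=20)
model = build_wbc_classifier()
logits = model(batch[:4])                                    # a small batch of super-pixel patches
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 14, (4,)))
loss.backward()                                              # an optimizer step would follow
print(tuple(logits.shape), float(loss))
```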

Hyper-Rectangle Based Prototype Selection Algorithm Preserving Class Regions (클래스 영역을 보존하는 초월 사각형에 의한 프로토타입 선택 알고리즘)

  • Baek, Byunghyun;Euh, Seongyul;Hwang, Doosung
    • KIPS Transactions on Software and Data Engineering, v.9 no.3, pp.83-90, 2020
  • Prototype selection offers the advantage of low learning time and storage space by selecting the minimum data representative of the in-class partitions of the training data. This paper designs a new training data generation method using hyper-rectangles that can be applied to general classification algorithms. A hyper-rectangular region contains no data from other classes and partitions the space of its own class. The median of the data within a hyper-rectangle is selected as a prototype to form the new training data, and the size of the hyper-rectangle is adjusted to reflect the data distribution in the class region. A set cover optimization algorithm is proposed to select the minimum prototype set that represents the whole training data. The proposed method reduces the polynomial time complexity of set cover optimization by using a greedy algorithm and a distance measure that requires no multiplication. In experimental comparison with hyper-sphere prototype selection, the proposed method is superior in terms of prototype rate and generalization performance.
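
A simplified, hypothetical sketch of the two ingredients named above: axis-aligned regions that exclude other-class data, and a greedy set cover over them with the coordinate-wise median as the prototype. The fixed Chebyshev (L-infinity) radius per point and the toy data are assumptions standing in for the paper's adaptive box sizing; the L-infinity distance is used here as one example of a distance computed without multiplication.

```python
import numpy as np

def chebyshev(a, B):
    """L-infinity distance from a point to each row of B (no multiplications needed)."""
    return np.abs(B - a).max(axis=1)

def greedy_rectangle_prototypes(X, y):
    """Each point proposes an axis-aligned box that stops just short of the nearest
    other-class point; greedy set cover keeps boxes until every point is covered.
    The prototype of a chosen box is the coordinate-wise median of the points it covers."""
    n = len(X)
    covers = []
    for i in range(n):
        enemy = chebyshev(X[i], X[y != y[i]]).min()              # nearest other-class point
        inside = (chebyshev(X[i], X) < enemy) & (y == y[i])      # same-class points in the box
        covers.append(np.flatnonzero(inside))
    uncovered = set(range(n))
    prototypes, proto_labels = [], []
    while uncovered:
        best = max(range(n), key=lambda i: len(uncovered.intersection(covers[i])))
        members = covers[best]
        prototypes.append(np.median(X[members], axis=0))         # box prototype
        proto_labels.append(y[best])
        uncovered -= set(members)
    return np.array(prototypes), np.array(proto_labels)

def predict(prototypes, proto_labels, X_new):
    """Classify with the nearest prototype (Chebyshev distance, to match the boxes)."""
    return np.array([proto_labels[np.argmin(chebyshev(x, prototypes))] for x in X_new])

# Toy usage: two Gaussian classes reduced to a handful of prototypes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.repeat([0, 1], 100)
P, L = greedy_rectangle_prototypes(X, y)
print(len(P), (predict(P, L, X) == y).mean())
```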

AN APPROACH TO THE TRAINING OF A SUPPORT VECTOR MACHINE (SVM) CLASSIFIER USING SMALL MIXED PIXELS

  • Yu, Byeong-Hyeok;Chi, Kwang-Hoon
    • Proceedings of the KSRS Conference, 2008.10a, pp.386-389, 2008
  • It is important that the training stage of a supervised classification be designed to provide representative spectral information. Conventional guidance on the design of the training stage typically calls for a large sample of randomly selected pure pixels in order to characterize the classes. Such guidance is generally given without regard to the specific nature of the application in hand, including the classifier to be used. Here, an approach to the training of a support vector machine (SVM) classifier that is the opposite of that generally promoted for training set design is suggested: it uses a small sample of mixed spectral responses drawn from purposefully selected locations (geographical boundaries). A sample of such data should, moreover, be easier and cheaper to acquire than that suggested by traditional approaches. In this research, we evaluated this approach against traditional approaches with high-resolution satellite data. The results showed that a small number of mixed pixels can be used to derive a classification with accuracy similar to that obtained from a large number of pure pixels. The approach can also substantially reduce the cost of training data acquisition because the sampling locations used are commonly easy to observe. (A synthetic illustration of this training-set design contrast appears after this entry.)

  • PDF
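
A purely synthetic illustration of the training-set design contrast described above (not the paper's satellite data or protocol): a linear SVM from scikit-learn is trained once on a large pure-pixel sample and once on a small sample of boundary-area mixed pixels, and both are scored on held-out pure pixels. The class means, mixture fractions, and noise levels are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic two-class "spectral" data: pure pixels cluster around class means,
# mixed pixels are linear mixtures of the two classes (fractions near 0.5).
mean_a, mean_b = np.array([0.2, 0.6, 0.3]), np.array([0.6, 0.3, 0.5])

def pure(mean, n):
    return mean + 0.03 * rng.normal(size=(n, 3))

def mixed(n):
    f = rng.uniform(0.35, 0.65, size=(n, 1))                 # mixture fraction near the boundary
    return f * mean_a + (1 - f) * mean_b + 0.03 * rng.normal(size=(n, 3))

# Large pure-pixel training set vs. a small, purposefully selected mixed-pixel set.
X_pure, y_pure = np.vstack([pure(mean_a, 500), pure(mean_b, 500)]), np.repeat([0, 1], 500)
X_mix = mixed(40)
y_mix = (np.linalg.norm(X_mix - mean_b, axis=1) <
         np.linalg.norm(X_mix - mean_a, axis=1)).astype(int)  # label by the dominant class

# Test on held-out pure pixels.
X_test, y_test = np.vstack([pure(mean_a, 1000), pure(mean_b, 1000)]), np.repeat([0, 1], 1000)

for name, X_tr, y_tr in [("1000 pure pixels", X_pure, y_pure),
                         ("40 mixed pixels ", X_mix, y_mix)]:
    clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
    print(name, (clf.predict(X_test) == y_test).mean())
```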

Japanese Vowel Sound Classification Using Fuzzy Inference System

  • Phitakwinai, Suwannee;Sawada, Hideyuki;Auephanwiriyakul, Sansanee;Theera-Umpon, Nipon
    • Journal of the Korea Convergence Society, v.5 no.1, pp.35-41, 2014
  • Automatic speech recognition is a popular research problem, and many research groups work in this field for different languages, including Japanese. Japanese vowel recognition is one of the important parts of a Japanese speech recognition system. In this research, a vowel classification system based on the Mamdani fuzzy inference system was developed. We tested the system on a blind test data set collected from one male native Japanese speaker and four male non-native Japanese speakers; none of the subjects in the blind test data set appeared in the training data set. The classification rate on the training data set is 95.0%. In the speaker-independent experiments, the classification rate for the native speaker is around 70.0%, whereas that for the non-native speakers is around 80.5%.
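
A tiny, hand-rolled Mamdani-style sketch for intuition only: triangular memberships over two formant-like inputs, min as the fuzzy AND, and the vowel whose rule fires most strongly as the decision. The formant ranges and the one-rule-per-vowel rule base are invented for the example; the paper's actual features, rule base, and defuzzification step are not reproduced.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# Assumed formant-based fuzzy sets (Hz); purely illustrative values.
F1_SETS = {"low": (150, 300, 450), "mid": (350, 550, 750), "high": (650, 850, 1100)}
F2_SETS = {"low": (600, 900, 1300), "mid": (1100, 1500, 1900), "high": (1700, 2300, 3000)}

# One illustrative rule per Japanese vowel: IF F1 is ... AND F2 is ... THEN vowel is ...
RULES = {"a": ("high", "mid"), "i": ("low", "high"), "u": ("low", "low"),
         "e": ("mid", "mid"),  "o": ("mid", "low")}

def classify_vowel(f1, f2):
    """Fire each rule with min (fuzzy AND) and return the vowel with the strongest rule."""
    strengths = {}
    for vowel, (s1, s2) in RULES.items():
        mu1 = trimf(f1, *F1_SETS[s1])
        mu2 = trimf(f2, *F2_SETS[s2])
        strengths[vowel] = min(mu1, mu2)
    return max(strengths, key=strengths.get), strengths

# Toy usage: formants roughly typical of /i/ (low F1, high F2).
print(classify_vowel(280, 2200)[0])
```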

Prototype-Based Classification Using Class Hyperspheres (클래스 초월구를 이용한 프로토타입 기반 분류)

  • Lee, Hyun-Jong;Hwang, Doosung
    • KIPS Transactions on Software and Data Engineering, v.5 no.10, pp.483-488, 2016
  • In this paper, we propose prototype-based classification learning that uses the nearest-neighbor rule. The nearest-neighbor rule is applied to segment the class regions of the training data with hyperspheres, where a hypersphere must cover only data from a single class. The radius of a hypersphere is set to the midpoint between the distance to the farthest same-class point it covers and the distance to the nearest other-class point. We then transform prototype selection into a set covering problem in order to determine the smallest set of prototypes that covers all the training data. The proposed prototype selection method is designed as a greedy algorithm and can process a large-scale training set in parallel. Prediction uses the nearest-neighbor rule with the set of prototypes as the new training data. In experiments, the generalization performance of the proposed method is superior to that of existing methods.
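
A sketch of the hypersphere construction and greedy set cover described above, under assumptions: Euclidean distances, the radius rule as stated in the abstract, and toy Gaussian data. It is not the authors' parallel implementation.

```python
import numpy as np

def hypersphere_covers(X, y):
    """For each point, the radius is the midpoint between the distance to the nearest
    other-class point and the farthest same-class point that is still closer than it."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    covers = []
    for i in range(n):
        d_enemy = D[i, y != y[i]].min()
        same = (y == y[i]) & (D[i] < d_enemy)
        d_friend = D[i, same].max()                          # includes D[i, i] = 0
        r = 0.5 * (d_friend + d_enemy)
        covers.append(np.flatnonzero((y == y[i]) & (D[i] <= r)))
    return covers

def greedy_prototypes(X, y):
    """Greedy set cover: repeatedly keep the hypersphere covering the most uncovered points."""
    covers = hypersphere_covers(X, y)
    uncovered, chosen = set(range(len(X))), []
    while uncovered:
        best = max(range(len(X)), key=lambda i: len(uncovered.intersection(covers[i])))
        chosen.append(best)
        uncovered -= set(covers[best])
    return X[chosen], y[chosen]

def predict(P, labels, X_new):
    """Nearest-neighbor rule over the selected prototypes."""
    D = np.linalg.norm(X_new[:, None, :] - P[None, :, :], axis=2)
    return labels[D.argmin(axis=1)]

# Toy usage: two Gaussian classes reduced to a few prototype centers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.repeat([0, 1], 100)
P, L = greedy_prototypes(X, y)
print(len(P), (predict(P, L, X) == y).mean())
```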

Derivation of MCE/GPD Training Algorithm Applicable to Weighted Hidden Markov Models (WHMM에 적용가능한 MCE/GPD 학습알고리듬에 관한 연구)

  • Choi, Hong-Sub
    • The Journal of the Acoustical Society of Korea, v.16 no.1, pp.104-109, 1997
  • This paper derives a new training algorithm for weighted hidden Markov models (WHMM) using the well-known minimum classification error / generalized probabilistic descent (MCE/GPD) method, with experimental results on the E-set. The derived algorithm generalizes the conventional adaptive training algorithm for WHMM, which means that the HMMs of multiple competing classes can be trained at the same time. The recognition results on the E-set show about 15% and 12% improvements on the training and test data, respectively. (A generic sketch of the MCE loss and GPD update appears after this entry.)

  • PDF
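
A generic sketch of the MCE/GPD machinery named in the title, with linear discriminant scores standing in for the per-class WHMM scores (an assumption made so the example stays self-contained): the smoothed misclassification measure, the sigmoid MCE loss, and a GPD (gradient) update that adjusts all competing classes at once. The smoothing constants, learning rate, and toy data are illustrative, not values from the paper.

```python
import numpy as np

def mce_gpd_step(W, b, x, k, eta=4.0, gamma=2.0, lr=0.05):
    """One MCE/GPD update for linear discriminants g_j(x) = w_j . x + b_j
    (stand-ins for the per-class WHMM scores)."""
    g = W @ x + b                                            # class discriminant scores
    M = len(g)
    others = np.delete(np.arange(M), k)
    # Smoothed anti-discriminant of the competing classes (log-sum-exp form).
    G = (1.0 / eta) * np.log(np.mean(np.exp(eta * g[others])))
    d = -g[k] + G                                            # misclassification measure
    ell = 1.0 / (1.0 + np.exp(-gamma * d))                   # sigmoid MCE loss
    dl_dd = gamma * ell * (1.0 - ell)
    # d d / d g_k = -1 ; d d / d g_j = softmax weight of competitor j.
    weights = np.exp(eta * g[others])
    weights /= weights.sum()
    grad_g = np.zeros(M)
    grad_g[k] = -dl_dd
    grad_g[others] = dl_dd * weights
    W -= lr * np.outer(grad_g, x)                            # GPD (gradient descent) update
    b -= lr * grad_g
    return ell

# Toy usage: three classes with Gaussian features; the loss should fall over the passes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, (50, 2)) for m in ((0, 0), (2, 0), (0, 2))])
labels = np.repeat([0, 1, 2], 50)
W, b = np.zeros((3, 2)), np.zeros(3)
for _ in range(20):
    losses = [mce_gpd_step(W, b, x, k) for x, k in zip(X, labels)]
print(np.mean(losses), (np.argmax(X @ W.T + b, axis=1) == labels).mean())
```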