• Title/Summary/Keyword: weak classifier


Improvement of Face Recognition Speed Using Pose Estimation (얼굴의 자세추정을 이용한 얼굴인식 속도 향상)

  • Choi, Sun-Hyung;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.5
    • /
    • pp.677-682
    • /
    • 2010
  • This paper addresses a method for roughly estimating human head pose by comparing Haar-wavelet values learned in AdaBoost-based face detection, and presents its application to face recognition. The learned weak classifiers are used to select a Haar-wavelet robust to each pose's features by comparing coefficients during the face detection process. The Mahalanobis distance is used to measure the matching degree in this Haar-wavelet selection. When a facial image is detected with the selected Haar-wavelet, the pose is estimated accordingly. The proposed pose estimation can be used to improve face recognition speed. Experiments are conducted to evaluate the performance of the proposed pose estimation method.
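
The Mahalanobis-distance matching step mentioned above can be illustrated with a minimal sketch (not the authors' implementation): it assumes a mean coefficient vector and an inverse covariance matrix of Haar-wavelet responses are available for each candidate pose; the names `pose_means` and `pose_cov_invs` are hypothetical.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance between a coefficient vector and a pose model."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def estimate_pose(coeffs, pose_means, pose_cov_invs):
    """Pick the pose whose Haar-wavelet coefficient model best matches coeffs.

    coeffs        : 1-D array of Haar-wavelet responses from the detected face
    pose_means    : dict pose -> mean coefficient vector (hypothetical input)
    pose_cov_invs : dict pose -> inverse covariance matrix (hypothetical input)
    """
    distances = {p: mahalanobis(coeffs, pose_means[p], pose_cov_invs[p])
                 for p in pose_means}
    return min(distances, key=distances.get)  # smallest distance = best match
```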

Real-Time Object Recognition for Children Education Applications based on Augmented Reality (증강현실 기반 아동 학습 어플리케이션을 위한 실시간 영상 인식)

  • Park, Kang-Kyu;Yi, Kang
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.1
    • /
    • pp.17-31
    • /
    • 2017
  • The aim of this paper is to present an object recognition method for an augmented reality system that uses existing education instruments designed without any consideration of image processing and recognition. The light reflection, sizes, shapes, and color ranges of these target instruments are major hurdles to object recognition. In addition, the real-time performance requirements on embedded devices and the user-experience constraints for child users are challenging issues for the image processing and object recognition approach. To meet these requirements, we employ a cascade of lightweight weak classification methods that complement each other, yielding a composite, highly accurate object classifier with a practically reasonable precision ratio. We implemented the proposed method and tested its performance on video of an actual play scenario with more than 11,700 frames. The experimental results showed a 0.54% miss ratio and a 1.35% false hit ratio.
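
The abstract does not detail how the lightweight classifiers are cascaded; the sketch below only illustrates the general idea of a cascade with early rejection, where cheap complementary tests (color range, size, shape) run in order and a region survives only if every stage accepts it. The stage functions are hypothetical placeholders.

```python
def cascade_classify(region, stages):
    """Run a cascade of lightweight weak classifiers over a candidate region.

    `stages` is an ordered list of cheap binary tests; the region is rejected
    as soon as any stage fails, so most non-target regions are discarded by
    the first (cheapest) tests, which keeps the cascade real-time.
    """
    for stage in stages:
        if not stage(region):
            return False   # early rejection
    return True            # accepted only if every weak stage agrees

# Hypothetical usage, ordering stages from cheapest to most expensive:
# stages = [passes_color_range, passes_size_check, passes_shape_template]
# is_target = cascade_classify(candidate_region, stages)
```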

Gait Phase Recognition based on EMG Signal for Stairs Ascending and Stairs Descending (상·하향 계단보행을 위한 근전도 신호 기반 보행단계 인식)

  • Lee, Mi-Ran;Ryu, Jae-Hwan;Kim, Sang-Ho;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.3
    • /
    • pp.181-189
    • /
    • 2015
  • A powered prosthesis is used to assist the walking of people with an amputated lower limb and/or weak leg strength. Accurate gait phase classification is indispensable for smooth movement control of the powered prosthesis. Previous gait phase classification based on physical sensors has the limitation that the powered prosthesis must be operated at the same speed as in the training process. Therefore, we propose an EMG-signal-based gait phase recognition method that classifies stair ascent and stair descent into four phases each, without using physical sensors. RMS, VAR, MAV, SSC, ZC, and WAMP features are extracted from the EMG signals, and an LDA (Linear Discriminant Analysis) classifier is used. In the training process, an AHRS sensor produces various ranges of walking steps according to the change of knee angles. The experimental results show that the average accuracies of the proposed method are about 85.6% for stair ascent and 69.5% for stair descent, whereas those of preliminary studies are about 58.5% and 35.3%, respectively. In addition, we can analyze the average recognition ratio of each gait phase with respect to individual muscles.
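
The six time-domain EMG features named in the abstract have standard definitions; a minimal sketch of extracting them from a single analysis window follows (the noise threshold is an illustrative assumption, not a value from the paper). An LDA classifier, e.g. scikit-learn's LinearDiscriminantAnalysis, would then be trained on such feature vectors to label the four gait phases.

```python
import numpy as np

def emg_features(x, thr=0.01):
    """Standard time-domain features of one EMG window x (1-D array).

    thr is a noise threshold used by ZC, SSC and WAMP; its value here is
    only an assumption for illustration.
    """
    x = np.asarray(x, dtype=np.float64)
    dx = np.diff(x)
    return {
        "RMS":  np.sqrt(np.mean(x ** 2)),            # root mean square
        "VAR":  np.var(x),                           # variance
        "MAV":  np.mean(np.abs(x)),                  # mean absolute value
        "ZC":   int(np.sum((x[:-1] * x[1:] < 0) &    # zero crossings
                           (np.abs(x[:-1] - x[1:]) > thr))),
        "SSC":  int(np.sum((dx[:-1] * dx[1:] < 0) &  # slope sign changes
                           (np.maximum(np.abs(dx[:-1]), np.abs(dx[1:])) > thr))),
        "WAMP": int(np.sum(np.abs(dx) > thr)),       # Willison amplitude
    }
```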

An Active Learning-based Method for Composing Training Document Set in Bayesian Text Classification Systems (베이지언 문서분류시스템을 위한 능동적 학습 기반의 학습문서집합 구성방법)

  • 김제욱;김한준;이상구
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.12
    • /
    • pp.966-978
    • /
    • 2002
  • There are two important problems in improving text classification systems based on machine learning. The first, called the "selection problem", is how to select a minimum number of informative documents from a given document collection. The second, called the "composition problem", is how to reorganize the selected training documents so that they fit the adopted learning method. The former problem is addressed by "active learning" algorithms, and the latter is discussed in "boosting" algorithms. This paper proposes a new learning method, called AdaBUS, which proactively solves both problems in the context of Naive Bayes classification systems. The proposed method constructs a more accurate classification hypothesis by increasing the variance among the "weak" hypotheses that determine the final classification hypothesis. Consequently, the proposed algorithm yields a perturbation effect that makes the boosting algorithm work properly. Through empirical experiments on the Reuters-21578 document collection, we show that the AdaBUS algorithm improves the Naive Bayes-based classification system more significantly than other conventional learning methods.
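
AdaBUS itself is not specified in the abstract, but the boosting-style document reweighting it builds on follows the standard AdaBoost update, sketched generically below (the weak hypothesis and labels are placeholders, not the paper's method): misclassified documents gain weight, which is what lets a boosting-based active learner concentrate on the most informative documents.

```python
import numpy as np

def adaboost_round(weights, y_true, y_pred):
    """One AdaBoost reweighting round over the training documents.

    weights : current document weights (non-negative, summing to 1)
    y_true  : true labels in {-1, +1}
    y_pred  : weak hypothesis predictions in {-1, +1}
    Returns the hypothesis weight alpha and the updated document weights.
    """
    err = np.clip(np.sum(weights[y_true != y_pred]), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)                # weight of this hypothesis
    new_w = weights * np.exp(-alpha * y_true * y_pred)   # up-weight mistakes
    return alpha, new_w / new_w.sum()
```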

Real-Time Face Detection and Tracking Using the AdaBoost Algorithm (AdaBoost 알고리즘을 이용한 실시간 얼굴 검출 및 추적)

  • Lee, Wu-Ju;Kim, Jin-Chul;Lee, Bae-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.10
    • /
    • pp.1266-1275
    • /
    • 2006
  • In this paper, we propose a real-time face detection and tracking algorithm based on the AdaBoost (Adaptive Boosting) algorithm. The proposed algorithm consists of two stages: face detection and face tracking. First, face detection uses eight very simple wavelet feature models. Each feature model is applied at variable sizes and positions to create an initial feature set. The AdaBoost algorithm is then trained on this initial feature set and on training images consisting of face and non-face images. The basic principle of AdaBoost is to create a final strong classifier by linearly combining weak classifiers. For training the AdaBoost algorithm, we propose using the SAT (Summed-Area Table) method. Face tracking is accomplished in real time using the position and size information of the detected face, and the view region is extended dynamically with a Pan-Tilt camera that keeps the center of the detected face at the center of the image. The experimental results show satisfactory computational efficiency and detection rates. In a real-time application using the Pan-Tilt camera, the detector runs at about 12 frames per second.
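
The SAT (integral image) mentioned in the abstract lets any rectangular Haar-like feature be evaluated with a constant number of lookups; a minimal sketch (not the authors' code) follows. A Haar-like feature value is then just the difference of two or three such rectangle sums, and the strong classifier is a weighted vote of the thresholded weak-classifier responses.

```python
import numpy as np

def summed_area_table(img):
    """Integral image with sat[y, x] = sum of img[:y, :x] (exclusive)."""
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    sat[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return sat

def rect_sum(sat, top, left, height, width):
    """Sum of pixels inside a rectangle in O(1) using four table lookups."""
    return (sat[top + height, left + width] - sat[top, left + width]
            - sat[top + height, left] + sat[top, left])
```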


Super-resolution Algorithm Using Adaptive Unsharp Masking for Infra-red Images (적외선 영상을 위한 적응적 언샤프 마스킹을 이용한 초고해상도 알고리즘)

  • Kim, Yong-Jun;Song, Byung Cheol
    • Journal of Broadcast Engineering
    • /
    • v.21 no.2
    • /
    • pp.180-191
    • /
    • 2016
  • When up-scaling algorithms designed for visible-light images are applied to infrared (IR) images, they rarely work well because IR images are usually blurred. To solve this problem, this paper proposes an up-scaling algorithm for IR images. We employ adaptive dynamic range encoding (ADRC) as a simple classifier, based on the observation that IR images have weak details. Also, since the human visual system is more sensitive to edges, our algorithm focuses on edges, and we add a pre-processing step in the learning phase. As a result, we can improve the visibility of IR images without increasing computational cost. Compared with anchored neighborhood regression (A+), the proposed algorithm provides better results; in terms of just-noticeable blur, it scores 0.0201 higher than A+.
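
The abstract uses ADRC only as a cheap patch classifier. A minimal sketch of the common 1-bit ADRC class index is shown below; the exact variant used in the paper is not stated, so the mid-range threshold here is an assumption.

```python
import numpy as np

def adrc_class_index(patch):
    """1-bit ADRC code of a small image patch (e.g. 3x3 or 4x4).

    Each pixel is coded 1 if it lies above the patch mid-range
    ((max + min) / 2), else 0; the bits concatenate into an integer class
    index, so patches with similar edge structure share a class.
    """
    p = np.asarray(patch, dtype=np.float64).ravel()
    threshold = (p.max() + p.min()) / 2.0
    bits = (p >= threshold).astype(np.int64)
    return int(bits @ (2 ** np.arange(bits.size)))  # binary code -> class id
```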

Study of Computer Aided Diagnosis for the Improvement of Survival Rate of Lung Cancer based on Adaboost Learning (폐암 생존율 향상을 위한 아다부스트 학습 기반의 컴퓨터보조 진단방법에 관한 연구)

  • Won, Chulho
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.10 no.1
    • /
    • pp.87-92
    • /
    • 2016
  • In this paper, we improve the classification performance for benign and malignant lung nodules by including parenchyma features. For small pulmonary nodules (4-10 mm), there is a limited number of CT voxels within the solid tumor, making them difficult to process with traditional CAD (computer-aided diagnosis) tools. Extending feature extraction to include the surrounding parenchyma increases the CT voxel set available for analysis in these very small nodule cases and is likely to improve diagnostic performance while keeping the CAD tool flexible with respect to scanner model and parameters. In AdaBoost learning using naive Bayes and SVM weak classifiers, a number of significant features were selected from 304 candidate features. The results on the COPDGene test set yielded an accuracy, sensitivity, and specificity of 100%. Therefore, the proposed method can be used effectively for computer-aided diagnosis.
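
The abstract's pairing of AdaBoost with a naive Bayes weak classifier can be reproduced in spirit with scikit-learn, as in the hedged sketch below; the synthetic feature matrix merely stands in for the paper's 304 nodule and parenchyma features, which are not available here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the 304 CT-derived features (not the paper's data).
X, y = make_classification(n_samples=120, n_features=304, n_informative=20,
                           random_state=0)   # y: 0 = benign, 1 = malignant

# AdaBoost with a Gaussian naive Bayes weak classifier.
clf = AdaBoostClassifier(GaussianNB(), n_estimators=50)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```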

Multi-target Classification Method Based on Adaboost and Radial Basis Function (아이다부스트(Adaboost)와 원형기반함수를 이용한 다중표적 분류 기법)

  • Kim, Jae-Hyup;Jang, Kyung-Hyun;Lee, Jun-Haeng;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.3
    • /
    • pp.22-28
    • /
    • 2010
  • AdaBoost is well known as a representative learner among the kernel methods. Based on statistical learning theory, AdaBoost shows good generalization performance and has been applied to various pattern recognition problems. However, AdaBoost is fundamentally designed for two-class classification, so a multi-class problem cannot be solved with AdaBoost directly. One-vs-All and Pair-Wise schemes have been applied to the multi-class classification problem. These are output coding methods, a general approach that solves a multi-class problem with multiple binary classifiers by decomposing the complex multi-class problem into a set of binary problems and then reconstructing the final decision from the outputs of the binary classifiers. However, these two methods do not show good performance. In this paper, we propose a method for multi-target classification that uses radial basis functions as the AdaBoost weak classifiers.
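
The One-vs-All decomposition discussed in the abstract can be expressed directly with scikit-learn; the sketch below uses stock AdaBoost binary classifiers on synthetic data and is not the paper's RBF-based weak classifier.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-class data standing in for the paper's multi-target features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# One-vs-All: one binary AdaBoost classifier per target class; the class
# whose classifier responds most strongly wins at prediction time.
ova = OneVsRestClassifier(AdaBoostClassifier(n_estimators=50))
ova.fit(X, y)
print("training accuracy:", ova.score(X, y))
```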

License Plate Detection with Improved Adaboost Learning based on Newton's Optimization and MCT (뉴턴 최적화를 통해 개선된 아다부스트 훈련과 MCT 특징을 이용한 번호판 검출)

  • Lee, Young-Hyun;Kim, Dae-Hun;Ko, Han-Seok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.12
    • /
    • pp.71-82
    • /
    • 2012
  • In this paper, we propose a license plate detection method with improved AdaBoost learning and the MCT (Modified Census Transform). The MCT represents local structure patterns as integer-valued features, which are robust to illumination change and memory-efficient. However, since these integer values are discrete, a lookup table is needed to design a weak classifier for AdaBoost learning. Previous research efforts have focused on minimizing the exponential criterion for AdaBoost optimization. In this paper, a method that combines the MCT with improved AdaBoost learning based on Newton's optimization of the exponential criterion is proposed for license plate detection. Experimental results on license plate patch images and field images demonstrate that the proposed method yields higher detection rates with fewer false positives than the conventional method using original AdaBoost learning.
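
The Modified Census Transform itself is well defined independently of the paper: every pixel of a 3x3 neighbourhood (including the centre) is compared with the neighbourhood mean, and the nine comparison bits form an integer index that is robust to monotonic illumination changes. A straightforward (unoptimized) sketch:

```python
import numpy as np

def mct(img):
    """Modified Census Transform: a 9-bit index per pixel (borders skipped)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint16)
    weights = 1 << np.arange(9)                 # bit weights 1, 2, ..., 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            bits = (patch > patch.mean()).ravel().astype(np.int64)
            out[y, x] = int(bits @ weights)     # value in [0, 511]
    return out
```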

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance as remarkable as that of DT ensembles. Recently, several works have reported that ensemble performance can degrade when the classifiers of an ensemble are highly correlated with one another, resulting in a multicollinearity problem that leads to performance degradation of the ensemble; these works have also proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement for stable learning algorithms. Unstable learning algorithms, such as decision tree learners, are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation causes the multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared bankruptcy prediction performance on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, whereas, with respect to ensemble learning, the DT ensemble shows more improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the performance degradation of the ensemble is due to the multicollinearity problem, and it suggests that optimization of the ensemble is needed to cope with it. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to solve the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the commonly used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction have shown that CO-NN effectively and stably enhances the performance of NN ensembles by choosing classifiers with the ensemble's correlations taken into account. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN thereby shows higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
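
The GA encoding described above (binary chromosomes whose bits switch individual classifiers in or out of the sub-ensemble) can be sketched generically as follows. The fitness used here is only a placeholder, since the paper's actual objective (error-reduction maximization under a VIF constraint, run in Evolver) cannot be reproduced from the abstract; the classifier objects are assumed to expose a predict method returning 0/1 labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(chromosome, classifiers):
    """A chromosome is a binary string; bit i keeps or drops classifier i."""
    return [clf for clf, bit in zip(classifiers, chromosome) if bit]

def fitness(chromosome, classifiers, X_val, y_val):
    """Placeholder fitness: majority-vote accuracy of the selected sub-ensemble.

    The paper instead maximizes error reduction subject to a VIF constraint
    that suppresses multicollinearity among the chosen classifiers.
    """
    subset = decode(chromosome, classifiers)
    if not subset:
        return 0.0
    votes = np.mean([clf.predict(X_val) for clf in subset], axis=0)
    return float(np.mean((votes > 0.5).astype(int) == y_val))

def mutate(chromosome, rate=0.05):
    """Standard GA mutation: flip each bit with a small probability."""
    flips = rng.random(chromosome.size) < rate
    return np.where(flips, 1 - chromosome, chromosome)
```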