• Title/Abstract/Keywords: AdaBoost classifier

Search results: 62 items (processing time: 0.026 seconds)

Automatic Emotion Classification of Music Signals Using MDCT-Driven Timbre and Tempo Features

  • Kim, Hyoung-Gook;Eom, Ki-Wan
    • The Journal of the Acoustical Society of Korea / Vol. 25, No. 2E / pp.74-78 / 2006
  • This paper proposes an effective method for classifying the emotions of music from its acoustic signals. Two feature sets, timbre and tempo, are extracted directly from the modified discrete cosine transform (MDCT) coefficients, which are the output of a partial MP3 (MPEG-1 Layer 3) decoder. The tempo feature extraction is based on long-term modulation spectrum analysis. To combine these two feature sets, which have different time resolutions, in an integrated system, a two-layer classifier based on the AdaBoost algorithm is used. The first layer employs the MDCT-driven timbre features; adding the MDCT-driven tempo features in the second layer improves the classification precision dramatically.
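
  • Illustrative sketch (not from the paper): the snippet below shows one way such a two-layer AdaBoost scheme could be wired up with scikit-learn, using random placeholder arrays in place of the MDCT-driven timbre and tempo features.

```python
# Minimal two-layer AdaBoost sketch: layer 1 classifies with timbre features only;
# layer 2 appends tempo features to layer 1's class probabilities and re-classifies.
# Feature matrices here are random placeholders, not MDCT-derived values.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
timbre = rng.normal(size=(n, 20))      # stand-in for MDCT timbre features
tempo = rng.normal(size=(n, 8))        # stand-in for modulation-spectrum tempo features
labels = rng.integers(0, 3, size=n)    # three hypothetical emotion classes

X1_tr, X1_te, X2_tr, X2_te, y_tr, y_te = train_test_split(
    timbre, tempo, labels, test_size=0.3, random_state=0)

layer1 = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X1_tr, y_tr)

# Layer 2: concatenate layer-1 class probabilities with the tempo features.
aug_tr = np.hstack([layer1.predict_proba(X1_tr), X2_tr])
aug_te = np.hstack([layer1.predict_proba(X1_te), X2_te])
layer2 = AdaBoostClassifier(n_estimators=100, random_state=0).fit(aug_tr, y_tr)

print("layer-2 accuracy:", layer2.score(aug_te, y_te))
```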

Rotated face detection based on sharing features (특징들의 공유에 의한 기울어진 얼굴 검출)

  • 송영모;고윤호
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2009년도 정보 및 제어 심포지움 논문집 / pp.31-33 / 2009
  • Face detection using the AdaBoost algorithm can process images rapidly while achieving high detection rates; it was regarded as the fastest and most robust approach and still is today, and many improvements and extensions of the method have been proposed. However, previous approaches deal only with upright faces and have limited discriminative capability for rotated faces, because they apply the same features to both upright and rotated faces. To solve this problem, input images must either be rotated or independently trained detectors must be built, which can be slow and can require a large amount of training data, since each classifier requires the computation of many different image features. This paper proposes a robust algorithm for finding rotated faces within an image. It reduces the computational and sample complexity by finding common features that can be shared across the classes, and it can also be applied to multi-class object detection.
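
  • Illustrative sketch (not the paper's algorithm): the snippet below conveys only the core idea of sharing features across rotation classes, by picking at each round the single decision-stump feature that minimizes the total weighted error summed over several class-specific binary problems; boosting weight updates are omitted and all data are synthetic placeholders.

```python
# Simplified feature-sharing illustration: at each round, select the one stump
# feature whose best per-class thresholds give the lowest total weighted error
# summed over all rotation classes. Data are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features, n_classes = 300, 50, 3
X = rng.normal(size=(n_samples, n_features))
Y = rng.integers(0, 2, size=(n_samples, n_classes))   # one binary label per rotation class
W = np.full((n_samples, n_classes), 1.0 / n_samples)  # per-class sample weights

def stump_error(x, y, w):
    """Best weighted error of a threshold stump on one feature for one class."""
    best = np.inf
    for thr in np.quantile(x, np.linspace(0.1, 0.9, 9)):
        for sign in (1, -1):
            pred = (sign * (x - thr) > 0).astype(int)
            best = min(best, np.sum(w * (pred != y)))
    return best

shared_features = []
for _ in range(5):  # five shared rounds
    totals = [sum(stump_error(X[:, f], Y[:, c], W[:, c]) for c in range(n_classes))
              for f in range(n_features)]
    shared_features.append(int(np.argmin(totals)))

print("features shared across all rotation classes:", shared_features)
```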

A Two-Stage Approach to Pedestrian Detection with a Moving Camera

  • Kim, Miae;Kim, Chang-Su
    • IEIE Transactions on Smart Processing and Computing / Vol. 2, No. 4 / pp.189-196 / 2013
  • This paper presents a two-stage approach for detecting pedestrians in video sequences taken from a moving vehicle. The first stage is a preprocessing step in which potential pedestrians are hypothesized: a difference image is constructed using global motion estimation, vertical and horizontal edge maps are extracted, and the color difference between the road and pedestrians is used to create candidate regions where pedestrians may be present. The candidate regions are further refined using the vertical edge symmetry of the pedestrians' legs. In the second stage, each hypothesis is verified using integral channel features and an AdaBoost classifier, which decides whether or not each candidate region contains a pedestrian. The proposed algorithm was tested on a range of dataset images and showed good performance.
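
  • Illustrative sketch (not the authors' implementation) of the verification stage: simple per-cell channel sums (LUV colour plus gradient magnitude) stand in for the integral channel features, and scikit-learn's AdaBoostClassifier stands in for the paper's boosted classifier; the candidate windows and labels are synthetic.

```python
# Verification-stage sketch: compute rough channel features for each candidate
# window and classify them with AdaBoost. Windows and labels are placeholders.
import numpy as np
import cv2
from sklearn.ensemble import AdaBoostClassifier

def channel_features(window_bgr, cell=8):
    """Sum LUV and gradient-magnitude channels over a cell grid (a rough
    stand-in for integral channel features)."""
    luv = cv2.cvtColor(window_bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    gray = cv2.cvtColor(window_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    channels = np.dstack([luv, mag[..., None]])
    h, w, c = channels.shape
    feats = channels[:h // cell * cell, :w // cell * cell] \
        .reshape(h // cell, cell, w // cell, cell, c).sum(axis=(1, 3))
    return feats.ravel()

# Hypothetical training data: 64x128 candidate windows with pedestrian labels.
windows = [np.random.randint(0, 256, (128, 64, 3), dtype=np.uint8) for _ in range(50)]
labels = np.random.randint(0, 2, size=50)
X = np.array([channel_features(w) for w in windows])
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("is pedestrian:", clf.predict(X[:1])[0])
```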

Human and Robot Tracking Using Histogram of Oriented Gradient Feature

  • Lee, Jeong-eom;Yi, Chong-ho;Kim, Dong-won
    • Journal of Platform Technology / Vol. 6, No. 4 / pp.18-25 / 2018
  • This paper describes a real-time human and robot tracking method in an Intelligent Space with multi-camera networks. The proposed method detects candidates for humans and robots using the histogram of oriented gradients (HOG) feature in an image. To classify humans and robots from the candidates in real time, a cascaded structure is used to construct a strong classifier from weak classifiers: a linear support vector machine (SVM) followed by a radial basis function (RBF) SVM. Using multiple-view geometry, the method estimates the 3D positions of humans and robots from their 2D coordinates in the image coordinate system and tracks these positions with a stochastic approach. To test its performance, humans and robots were asked to move along given rectangular and circular paths. Experimental results show that the proposed method reduces the localization error and is well suited to practical human-centered services in the Intelligent Space.
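
  • Illustrative sketch (not the authors' trained system): one way the cascaded linear/RBF SVM stage over HOG features could look, using scikit-image and scikit-learn on synthetic patches.

```python
# Two-stage SVM cascade sketch over HOG features: a fast linear SVM filters
# candidates, and an RBF SVM verifies the survivors. Data are synthetic.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(2)
patches = rng.random((200, 64, 64))     # stand-in image patches
labels = rng.integers(0, 2, size=200)   # 1 = human/robot, 0 = background
X = np.array([hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for p in patches])

linear_stage = LinearSVC(max_iter=10000).fit(X, labels)       # cheap first stage
rbf_stage = SVC(kernel="rbf", gamma="scale").fit(X, labels)   # stronger second stage

def cascade_predict(x):
    # Reject early if the linear stage says "background"; otherwise ask the RBF stage.
    if linear_stage.decision_function(x.reshape(1, -1))[0] < 0:
        return 0
    return int(rbf_stage.predict(x.reshape(1, -1))[0])

print("cascade decision for the first patch:", cascade_predict(X[0]))
```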

Infrared Target Recognition using Heterogeneous Features with Multi-kernel Transfer Learning

  • Wang, Xin;Zhang, Xin;Ning, Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 9 / pp.3762-3781 / 2020
  • Infrared pedestrian target recognition is a vital problem of significant interest in computer vision. In this work, a novel infrared pedestrian target recognition method that uses heterogeneous features with multi-kernel transfer learning is proposed. First, to fully exploit the characteristics of infrared pedestrian targets, a novel multi-scale monogenic filtering-based completed local binary pattern descriptor, referred to as MSMF-CLBP, is designed to extract texture information, and an improved histogram of oriented gradient-Fisher vector descriptor, referred to as HOG-FV, is proposed to extract shape information. Second, to enrich the semantic content of the feature representation, these two heterogeneous features are integrated to obtain a more complete description of infrared pedestrian targets. Third, to overcome the shortcomings of state-of-the-art classifiers, such as poor generalization, the scarcity of labeled infrared samples, and distributional and semantic deviations between training and testing samples, an effective multi-kernel transfer learning classifier called MK-TrAdaBoost is designed. Experimental results show that the proposed method outperforms many state-of-the-art recognition approaches for infrared pedestrian targets.
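
  • Illustrative sketch (not MK-TrAdaBoost itself): the transfer-learning idea can be shown with a classic TrAdaBoost-style weight update on synthetic source and target data, using decision stumps as weak learners; the multi-kernel part and the heterogeneous features are not reproduced.

```python
# Skeleton of a TrAdaBoost-style weight update: misclassified source samples are
# down-weighted, misclassified (scarce) target samples are up-weighted.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
Xs, ys = rng.normal(size=(200, 30)), rng.integers(0, 2, 200)  # source-domain samples
Xt, yt = rng.normal(size=(60, 30)), rng.integers(0, 2, 60)    # scarce target (infrared) samples
X, y = np.vstack([Xs, Xt]), np.concatenate([ys, yt])
w = np.ones(len(y)) / len(y)
n_src, n_iter = len(ys), 10
beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_iter))

learners = []
for _ in range(n_iter):
    h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w / w.sum())
    miss = (h.predict(X) != y).astype(float)
    eps = np.sum(w[n_src:] * miss[n_src:]) / np.sum(w[n_src:])  # error on target only
    eps = min(max(eps, 1e-6), 0.499)
    beta_t = eps / (1.0 - eps)
    w[:n_src] *= beta_src ** miss[:n_src]    # down-weight misclassified source samples
    w[n_src:] *= beta_t ** (-miss[n_src:])   # up-weight misclassified target samples
    learners.append((h, np.log(1.0 / beta_t)))

print("trained", len(learners), "weak learners with TrAdaBoost-style weights")
```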

CCTV Based Gender Classification Using a Convolutional Neural Networks (컨볼루션 신경망을 이용한 CCTV 영상 기반의 성별구분)

  • 강현곤;박장식;송종관;윤병우
    • 한국멀티미디어학회논문지 / Vol. 19, No. 12 / pp.1943-1950 / 2016
  • Recently, gender classification has attracted a great deal of attention in the field of video surveillance, with applications such as detecting crimes against women and business intelligence. In this paper, we propose a method that detects pedestrians in CCTV video and classifies the gender of the detected objects. Many algorithms have been proposed to classify people according to their gender; this paper presents gender classification using a convolutional neural network. The detection phase is performed by the AdaBoost algorithm based on Haar-like and LBP features. The classifier and detector are trained with datasets generated from CCTV images. The proposed method achieves a matching rate of 89.9% on male videos and 90.7% on female videos. The simulation results show that the proposed gender classification outperforms conventional classification algorithms.
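
  • Illustrative sketch (not the paper's system): the detection front end could be wired up with OpenCV's pretrained AdaBoost cascade of Haar-like features, with a placeholder `classify_gender` function standing in for the paper's CNN; the input file name is hypothetical.

```python
# Detection front-end sketch: an OpenCV Haar cascade (an AdaBoost cascade)
# proposes pedestrian boxes, and each crop would then go to a gender CNN.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

def classify_gender(crop_bgr):
    # Placeholder: a trained CNN would return "male" or "female" here.
    return "unknown"

frame = cv2.imread("cctv_frame.jpg")   # hypothetical CCTV frame path
if frame is not None:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in boxes:
        print((x, y, w, h), classify_gender(frame[y:y + h, x:x + w]))
```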

Boosting the Face Recognition Performance of Ensemble Based LDA for Pose, Non-uniform Illuminations, and Low-Resolution Images

  • Haq, Mahmood Ul;Shahzad, Aamir;Mahmood, Zahid;Shah, Ayaz Ali;Muhammad, Nazeer;Akram, Tallha
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 6 / pp.3144-3164 / 2019
  • Face recognition systems have several potential applications, such as security and biometric access control. Ongoing research focuses on developing a robust face recognition algorithm that can mimic the human vision system. Face pose, non-uniform illumination, and low resolution are the main factors that influence the performance of face recognition algorithms. This paper proposes a novel method to handle these aspects. The proposed face recognition algorithm first uses 68 points to locate a face in the input image and then partially uses PCA to extract the mean image, while AdaBoost and LDA are used to extract face features. In the final stage, a classic nearest-centre classifier is used for face classification. The proposed method outperforms recent state-of-the-art face recognition algorithms, producing a high recognition rate and a much lower error rate in a very challenging setting in which only the frontal (0°) face sample is available in the gallery and seven poses (0°, ±30°, ±35°, and ±45°) are used as probes, on the LFW and CMU Multi-PIE databases.
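
  • Illustrative sketch (not the authors' pipeline): the back end of such a method can be approximated as PCA followed by LDA and a nearest-centroid classifier; the 68-point alignment and AdaBoost feature-selection steps are omitted and the face data are random stand-ins.

```python
# PCA -> LDA -> nearest-centroid sketch on synthetic "face" vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 1024))   # stand-in for aligned 32x32 face crops
y = rng.integers(0, 10, size=150)  # ten hypothetical identities

model = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis(), NearestCentroid())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```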

Design of Optimized RBFNNs based on Night Vision Face Recognition Simulator Using the (2D)² PCA Algorithm ((2D)² PCA 알고리즘을 이용한 최적 RBFNNs 기반 나이트비전 얼굴인식 시뮬레이터 설계)

  • 장병희;김현기;오성권
    • 한국지능시스템학회논문지 / Vol. 24, No. 1 / pp.1-6 / 2014
  • In this study, an optimized RBFNNs-based night-vision face recognition simulator is designed using the (2D)² PCA algorithm. When images are acquired at night with a CCD camera, the low illumination yields images of a quality that makes recognition difficult, so in this paper night-time face images were acquired with a night-vision camera instead. The Ada-Boost algorithm is used to detect night-time face images among face and non-face image regions, and histogram equalization is applied to minimize image distortion. The (2D)² PCA algorithm is then used to reduce the resulting high-dimensional images to a lower dimension. Face recognition is performed with an intelligent pattern classification model based on polynomial-based RBFNNs, and finally the differential evolution algorithm is used to optimize the parameters. To evaluate the performance of the (2D)² PCA-based optimized RBFNNs night-vision face recognition system, IC&CI Lab data are used and an actual face recognition system is designed.
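
  • Illustrative sketch (not from the paper): a compact NumPy rendering of (2D)² PCA itself, with random images in place of night-vision face crops; the RBFNN classifier and the differential-evolution tuning are not included.

```python
# Minimal (2D)^2 PCA sketch: project each image matrix from both the row and the
# column directions, shrinking an m x n image to a small d_col x d_row matrix.
import numpy as np

def two_directional_2dpca(images, d_row=8, d_col=8):
    A = np.asarray(images, dtype=float)               # shape (M, m, n)
    D = A - A.mean(axis=0)
    G_row = np.einsum("kij,kil->jl", D, D) / len(A)   # n x n (row direction)
    G_col = np.einsum("kij,klj->il", D, D) / len(A)   # m x m (column direction)
    X = np.linalg.eigh(G_row)[1][:, ::-1][:, :d_row]  # top right-projection vectors
    Z = np.linalg.eigh(G_col)[1][:, ::-1][:, :d_col]  # top left-projection vectors
    return np.array([Z.T @ a @ X for a in A]), (Z, X)

faces = np.random.rand(100, 64, 64)
features, _ = two_directional_2dpca(faces)
print(features.shape)   # (100, 8, 8): compact features for a downstream classifier
```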

Real-time Hand Region Detection based on Cascade using Depth Information (깊이정보를 이용한 케스케이드 방식의 실시간 손 영역 검출)

  • 주성일;원선희;최형일
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 2, No. 10 / pp.713-722 / 2013
  • This paper proposes a real-time hand region detection method based on a cascade approach using depth information. To detect the hand region quickly and robustly under changing illumination conditions, features that use only depth information are proposed, together with a detection method based on a boosted cascade of classifiers. First, to extract features from depth information alone, the difference between the center depth value of the input image and the average depth value of each divided block is computed, and, to detect hand regions of all sizes, the size of the hand region is predicted from the center depth value using a second-order linear model. A cascade scheme is then applied for training and recognition with the features extracted from the hand region. To maintain accuracy while improving speed, each stage of the proposed classifier consists of a single weak classifier and is trained in an intentionally over-fitted manner by finding the threshold with the lowest error rate that still satisfies the required detection rate. The trained classifier classifies hand regions, and the final hand region is detected through a merging step. Finally, the efficiency of the proposed hand region detection algorithm is demonstrated through quantitative and qualitative comparisons with various existing AdaBoost-based methods.
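
  • Illustrative sketch (not the paper's implementation): the depth-only block-difference features and the depth-to-window-size prediction can be written roughly as below; the grid size and the quadratic-model coefficients are hypothetical stand-ins, not the values fitted in the paper.

```python
# Depth-only features: centre depth minus per-block mean depth, plus a quadratic
# model mapping centre depth to hand-window size. Coefficients are made up.
import numpy as np

def depth_block_features(depth_window, grid=4):
    """Centre depth minus per-block mean depth over a grid x grid partition."""
    h, w = depth_window.shape
    centre = depth_window[h // 2, w // 2]
    blocks = depth_window[:h // grid * grid, :w // grid * grid] \
        .reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (centre - blocks).ravel()

def predict_window_size(centre_depth_mm, coeffs=(2.0e-5, -0.09, 140.0)):
    """Second-order model from centre depth to window size in pixels
    (hypothetical coefficients)."""
    a, b, c = coeffs
    return a * centre_depth_mm ** 2 + b * centre_depth_mm + c

depth = np.random.uniform(500, 1500, size=(64, 64))   # fake depth patch (mm)
print(depth_block_features(depth).shape)              # (16,) features for one window
print(round(predict_window_size(800.0)), "px")
```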

Study of Computer Aided Diagnosis for the Improvement of Survival Rate of Lung Cancer based on Adaboost Learning (폐암 생존율 향상을 위한 아다부스트 학습 기반의 컴퓨터보조 진단방법에 관한 연구)

  • 원철호
    • 재활복지공학회논문지 / Vol. 10, No. 1 / pp.87-92 / 2016
  • In this paper, classification performance is improved by including the lung parenchyma surrounding the region of interest among the features used to classify benign and malignant nodules. Very small pulmonary nodules (4-10 mm) identified on CT are difficult to handle with existing computer-aided diagnosis (CAD) tools because the number of CT voxels within the solid tumor is limited. For such very small nodules, extracting features that include the surrounding parenchyma increases the available set of CT voxels and improves diagnostic performance by making the CAD tool less sensitive to the CT scanner and its acquisition parameters. Effective features were selected from 304 candidate features through AdaBoost learning with naive Bayes and SVM weak classifiers. Applying the proposed method to the COPDGene data yielded 100% accuracy, sensitivity, and specificity, showing that it can be usefully employed for computer-aided diagnosis.
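
  • Illustrative sketch (not the paper's pipeline): a minimal AdaBoost run with a naive-Bayes weak learner over a 304-dimensional feature vector, with synthetic data in place of the COPDGene features; the SVM weak learner and the actual feature-selection procedure are not reproduced.

```python
# AdaBoost with a Gaussian naive-Bayes weak learner on synthetic nodule features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 304))    # 304 candidate features per nodule (synthetic)
y = rng.integers(0, 2, size=120)   # 0 = benign, 1 = malignant

# `estimator=` requires scikit-learn >= 1.2; older versions use `base_estimator=`.
clf = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=50, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```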