• Title/Summary/Keyword: AdaBoost algorithm


HW/SW Co-design of a Visual Driver Drowsiness Detection System

  • Lai, Kok Choong;Wong, M.L. Dennis;Islam, Syed Zahidul
    • Journal of Convergence Society for SMB, v.3 no.1, pp.31-41, 2013
  • There have been various recent methods proposed for detecting driver drowsiness (DD) to avert fatal accidents. This work proposes a hardware/software (HW/SW) co-design approach to the implementation of a DD detection system adapted from an AdaBoost-based object detection algorithm with Haar-like features [1] to monitor the driver's eye closure rate. In this work, critical functions of the DD detection algorithm are accelerated through custom hardware components in order to speed up processing, while the software component implements the overall control and logical operations required for the complete functionality of the DD detection algorithm. The HW/SW architecture was implemented on an Altera DE2 board with a video daughter board. Performance of the proposed implementation was evaluated and benchmarked against some recent works.

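The paper's software side follows the familiar AdaBoost/Haar-cascade pattern: locate the face and eyes in each frame and track how long the eyes stay closed. The sketch below is a software-only approximation using OpenCV's pretrained cascades; the frame source, the search-in-upper-half-of-face heuristic, and the 15-frame closure limit are illustrative assumptions rather than details from the paper, and the hardware acceleration is of course not reproduced.

```python
# Software-only sketch: AdaBoost/Haar cascades for eye-closure monitoring.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def monitor(video_path, closed_frame_limit=15):
    cap = cv2.VideoCapture(video_path)
    closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes_found = False
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 5):
            # Look for open eyes only in the upper half of the face box.
            roi = gray[y:y + h // 2, x:x + w]
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) > 0:
                eyes_found = True
        # Count consecutive frames with no detectable open eye.
        closed_frames = 0 if eyes_found else closed_frames + 1
        if closed_frames >= closed_frame_limit:
            print("Drowsiness warning: eyes appear closed too long")
            closed_frames = 0
    cap.release()
```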

Efficient Face Detection Algorithm using Depth and Color Information (영상의 깊이 정보와 컬러 정보를 이용한 효율적인 얼굴 검출 알고리듬)

  • Bae, Yun-Jin;Choi, Hyun-Jun;Seo, Young-Ho;Yoo, Ji Sang;Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2011.07a, pp.230-232, 2011
  • The AdaBoost-based face detection algorithm proposed by Viola and Jones is one of the most widely used algorithms in many fields thanks to its fast detection speed and excellent performance. However, face detection with AdaBoost still produces false detections; reducing them requires a large amount of computation, which is a drawback in terms of speed for applications that need real-time face detection. Since the conventional AdaBoost face detector uses only grayscale images, exploiting the color information of the image together with additional information can reduce the false detection rate with fewer computations, and once a correct face has been detected, applying a tracking algorithm enables real-time face detection on video input. In this paper, as a preliminary step for face tracking, we propose an algorithm that efficiently detects faces using color information and, as additional information, depth information.

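The abstract's main point is to cut AdaBoost's false detections and computation by restricting the search area with color and depth cues before the detector runs. Below is a minimal sketch of that two-stage idea, assuming a depth map registered to the color image and OpenCV's pretrained frontal-face cascade; the skin-color thresholds and depth range are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr, depth_mm, depth_range=(400, 2500)):
    """Run the Haar/AdaBoost detector only where skin color and depth agree."""
    # 1) Rough skin-color mask in YCrCb space (thresholds are assumptions).
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)) > 0
    # 2) Keep only pixels within a plausible subject distance (millimetres).
    near = (depth_mm >= depth_range[0]) & (depth_mm <= depth_range[1])
    # 3) Blank out everything else; the uniform regions are rejected cheaply
    #    by the early cascade stages, so fewer windows reach the full detector.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray[~(skin & near)] = 0
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```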

Speed Sign Recognition Using Sequential Cascade AdaBoost Classifier with Color Features

  • Kwon, Oh-Seol
    • Journal of Multimedia Information System, v.6 no.4, pp.185-190, 2019
  • For future autonomous cars, it is necessary to recognize various elements of the surrounding environment, such as lanes, traffic lights, and vehicles. This paper presents a method of speed sign recognition from a single image in automatic driving assistance systems. The detection step of the proposed method emphasizes the color attributes in a modified YUV color space because the speed sign area is characterized by its color. The proposed method is further improved by extracting the digits from the highlighted circle region. A sequential cascade AdaBoost classifier is then used in the recognition step for real-time processing. Experimental results show that the performance of the proposed algorithm is superior to that of conventional algorithms for various speed signs and real-world conditions.
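
The detection step above boils down to making the red rim of the sign dominant in a chroma-emphasized image, cropping the circular region, and handing the digits to the cascade classifier. The sketch below shows one plausible color-emphasis and circle-cropping front end; the channel arithmetic, thresholds, and Hough parameters are assumptions, and the paper's sequential cascade AdaBoost recognizer is only indicated by a comment.

```python
import cv2
import numpy as np

def find_speed_sign_rois(bgr):
    """Highlight red-rimmed circular regions and return cropped candidates."""
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
    _, u, v = cv2.split(yuv)
    # Red pixels have high V and low U chroma; their difference makes the
    # sign rim stand out (an illustrative weighting, not the paper's).
    emphasis = cv2.GaussianBlur(cv2.subtract(v, u), (9, 9), 2)
    circles = cv2.HoughCircles(emphasis, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=40, param1=100, param2=30,
                               minRadius=15, maxRadius=120)
    rois = []
    if circles is not None:
        for cx, cy, r in np.round(circles[0]).astype(int):
            x0, y0 = max(cx - r, 0), max(cy - r, 0)
            rois.append(bgr[y0:cy + r, x0:cx + r])  # digits sit inside the circle
    return rois

# Each ROI would then go to the recognition stage, e.g. a sequential cascade
# of AdaBoost classifiers trained on digit samples (not reproduced here).
```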

New Rectangle Feature Type Selection for Real-time Facial Expression Recognition (실시간 얼굴 표정 인식을 위한 새로운 사각 특징 형태 선택기법)

  • Kim Do Hyoung;An Kwang Ho;Chung Myung Jin;Jung Sung Uk
    • Journal of Institute of Control, Robotics and Systems, v.12 no.2, pp.130-137, 2006
  • In this paper, we propose a method of selecting new types of rectangle features that are suitable for facial expression recognition. The basic concept is similar to Viola's approach, which is used for face detection. Instead of the previous Haar-like features, we choose rectangle features for facial expression recognition from among all possible rectangle types in a 3×3 matrix form using the AdaBoost algorithm. The facial expression recognition system built with the proposed rectangle features is also compared with one using the previous rectangle features in terms of recognition performance. The simulation and experimental results show that the proposed approach performs better in facial expression recognition.
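
One way to read the proposed feature family is as sign patterns over a 3×3 grid of cells: each cell contributes its mean intensity with weight +1, −1, or 0, and AdaBoost picks the most discriminative patterns. The sketch below follows that reading with scikit-learn's AdaBoost, whose default weak learner is a depth-1 decision stump; the pattern enumeration and cell layout are illustrative assumptions, not the paper's exact construction.

```python
import itertools
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# All 3x3 sign patterns over {-1, 0, +1}, excluding the all-zero pattern.
PATTERNS = np.array([p for p in itertools.product((-1, 0, 1), repeat=9) if any(p)],
                    dtype=float)

def grid_features(patch):
    """patch: 2-D grayscale face image; returns one value per sign pattern."""
    h, w = patch.shape
    cells = np.array([patch[i * h // 3:(i + 1) * h // 3,
                            j * w // 3:(j + 1) * w // 3].mean()
                      for i in range(3) for j in range(3)])
    return PATTERNS @ cells

def train_selector(face_patches, labels, n_rounds=50):
    """Each boosting round effectively selects one rectangle-type feature."""
    X = np.stack([grid_features(p) for p in face_patches])
    clf = AdaBoostClassifier(n_estimators=n_rounds)  # stump weak learners
    clf.fit(X, labels)
    return clf
```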

Subimage Detection of Window Image Using AdaBoost (AdaBoost를 이용한 윈도우 영상의 하위 영상 검출)

  • Gil, Jong In;Kim, Manbae
    • Journal of Broadcast Engineering, v.19 no.5, pp.578-589, 2014
  • A window image is what is displayed on a monitor screen when application programs are executed on a computer; this includes webpages, video players, and many other applications. Compared with other applications, a webpage delivers a variety of information in various forms. Unlike a natural image captured by a camera, a window image such as a webpage contains diverse components such as text, logos, icons, and subimages, each delivering different kinds of information to users. Because text and images are presented in various forms, components with different characteristics need to be separated locally. In this paper, we divide window images into many sub-blocks and classify each divided region into background, text, or subimage. The detected subimages can be applied to 2D-to-3D conversion, image retrieval, image browsing, and so forth. Among the many possible subimage classification methods, we utilize AdaBoost to verify that a machine learning-based algorithm can be effective for subimage detection. In the experiment, we show that the subimage detection rate is 93.4% with a false alarm rate of 13%.
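
The pipeline above is essentially: split the window image into fixed-size blocks, compute a feature vector per block, and let an AdaBoost classifier label each block as background, text, or subimage. The sketch below follows that structure; the 32-pixel block size and the simple color/edge statistics are assumptions standing in for the paper's actual features, and scikit-learn's multiclass AdaBoost is used as the learner.

```python
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

BLOCK = 32  # block size in pixels (illustrative choice)

def block_features(bgr_block):
    """Per-block statistics: color mean/std plus edge density."""
    gray = cv2.cvtColor(bgr_block, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    pixels = bgr_block.reshape(-1, 3)
    return np.r_[pixels.mean(axis=0), pixels.std(axis=0), edges.mean()]

def split_blocks(img):
    h, w = img.shape[:2]
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            yield (x, y), img[y:y + BLOCK, x:x + BLOCK]

def train(window_images, block_labels):
    """block_labels: per-image arrays (0=background, 1=text, 2=subimage),
    ordered the same way split_blocks traverses each image."""
    X = [block_features(b) for img in window_images for _, b in split_blocks(img)]
    clf = AdaBoostClassifier(n_estimators=100)
    clf.fit(np.stack(X), np.concatenate(block_labels))
    return clf

def classify(clf, window_image):
    coords, blocks = zip(*split_blocks(window_image))
    preds = clf.predict(np.stack([block_features(b) for b in blocks]))
    return dict(zip(coords, preds))
```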

Context-aware Video Surveillance System

  • An, Tae-Ki;Kim, Moon-Hyun
    • Journal of Electrical Engineering and Technology, v.7 no.1, pp.115-123, 2012
  • A video analysis system used to detect events in video streams generally has several processes, including object detection, object trajectory analysis, and recognition of the trajectories by comparison with an a priori trained model. However, these processes do not work well in a complex environment that has many occlusions, mirror effects, and/or shadow effects. We propose a new approach to a context-aware video surveillance system that detects predefined contexts in video streams. The proposed system consists of two modules: a feature extractor and a context recognizer. The feature extractor calculates the moving energy, which represents the amount of moving objects in a video stream, and the stationary energy, which represents the amount of still objects. We represent situations and events as motion changes and stationary energy in video streams. The context recognizer determines whether predefined contexts are included in video streams using the moving and stationary energies extracted by the feature extractor. To train each context model and recognize predefined contexts in video streams, we propose and use DAdaBoost, a new ensemble classifier based on the AdaBoost algorithm, one of the most well-known ensemble classification algorithms. Our proposed approach is expected to be a robust method in more complex environments that have a mirror effect and/or a shadow effect.
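
The feature extractor described above reduces a video window to two scalars, moving energy and stationary energy, which then feed the DAdaBoost context models. The sketch below shows one plausible way to compute those two energies with frame differencing and a slowly updated background; the thresholds and exact definitions are assumptions, and the DAdaBoost ensemble itself is not reproduced.

```python
import cv2
import numpy as np

def motion_energies(frames, diff_thresh=25, still_thresh=15):
    """frames: list of grayscale uint8 frames; returns (moving, stationary) energy."""
    moving, stationary = 0.0, 0.0
    background = frames[0].astype(np.float32)
    prev = frames[0]
    for frame in frames[1:]:
        # Moving energy: fraction of pixels that changed since the last frame.
        diff = cv2.absdiff(frame, prev)
        is_moving = diff > diff_thresh
        moving += float(is_moving.mean())
        # Stationary energy: pixels that differ from the slowly updated
        # background but are not currently moving (still foreground objects).
        fg = cv2.absdiff(frame.astype(np.float32), background) > still_thresh
        stationary += float((fg & ~is_moving).mean())
        cv2.accumulateWeighted(frame.astype(np.float32), background, 0.01)
        prev = frame
    n = max(len(frames) - 1, 1)
    return moving / n, stationary / n
```

These two values per sliding window would form the feature vector handed to the ensemble classifier for context recognition.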

Driver Verification System Using Biometrical GMM Supervector Kernel (생체기반 GMM Supervector Kernel을 이용한 운전자검증 기술)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.9 no.3, pp.67-72, 2010
  • This paper presents a biometric driver verification system for in-car experiments based on the analysis of speech and face information. We use Mel-scale Frequency Cepstral Coefficients (MFCCs) for speaker verification from the speech information. For face verification, the face region is detected by the AdaBoost algorithm, and a dimension-reduced feature vector is extracted from that region using principal component analysis. In this paper, we apply the extracted speech and face feature vectors to an SVM kernel with Gaussian Mixture Model (GMM) supervectors. The experimental results of the proposed approach show a clear improvement compared to a simple GMM or SVM approach.
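
The fusion step hinges on GMM supervectors: a universal background GMM is adapted towards each driver's frame-level features (MFCCs for speech, PCA-projected face features for vision), the adapted component means are stacked into one long vector, and an SVM separates drivers in that space. The sketch below builds such supervectors with scikit-learn; the mean-only MAP-style adaptation and the relevance factor are simplified assumptions, and the linear kernel stands in for the paper's supervector kernel.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def train_ubm(pooled_features, n_components=64):
    """Universal background model over pooled frame-level features (e.g. MFCCs)."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag")
    ubm.fit(np.vstack(pooled_features))
    return ubm

def supervector(ubm, features, relevance=16.0):
    """Mean-only MAP adaptation of the UBM, then stack the adapted means."""
    resp = ubm.predict_proba(features)      # (n_frames, n_components)
    n_k = resp.sum(axis=0) + 1e-10          # soft counts per component
    f_k = resp.T @ features                 # soft first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]
    adapted = alpha * (f_k / n_k[:, None]) + (1 - alpha) * ubm.means_
    return adapted.ravel()

def train_verifier(ubm, enrol_features, labels):
    """SVM over supervectors; labels identify the enrolled drivers."""
    X = np.stack([supervector(ubm, f) for f in enrol_features])
    svm = SVC(kernel="linear", probability=True)
    svm.fit(X, labels)
    return svm
```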

Robust Detection of Body Areas Using an Adaboost Algorithm (에이다부스트 알고리즘을 이용한 인체 영역의 강인한 검출)

  • Jang, Seok-Woo;Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society, v.17 no.11, pp.403-409, 2016
  • Recently, harmful content (such as nude images and photos) has been widely distributed. Therefore, there have been various studies on detecting and filtering out such harmful image content. In this paper, we propose a new method using Haar-like features and an AdaBoost algorithm for robustly extracting navel areas in a color image. The suggested algorithm first detects the human nipples through color information and obtains candidate navel areas using positional information from the extracted nipple areas. The method then selects real navel regions by filtering with Haar-like features and an AdaBoost algorithm. Experimental results show that the suggested algorithm detects navel areas in color images 1.6% more robustly than an existing method. We expect that the suggested navel detection algorithm will be usefully utilized in many application areas related to 2D or 3D harmful content detection and filtering.
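
Structurally this is a candidate-then-verify pipeline: a cheap color stage proposes regions derived from the detected reference points, and an AdaBoost classifier over Haar-like features makes the final decision on each candidate. The generic sketch below shows only that two-stage structure; the skin-color thresholds, the candidate rule, and the cascade file name are placeholders, not the paper's trained models.

```python
import cv2

# A cascade trained offline with Haar-like features and AdaBoost would be
# loaded here; "verifier_cascade.xml" is a hypothetical file name.
verifier = cv2.CascadeClassifier("verifier_cascade.xml")

def color_candidates(bgr, min_area=500):
    """Stage 1: coarse candidate boxes from a skin-color mask."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

def detect(bgr):
    """Stage 2: run the AdaBoost/Haar verifier only inside candidate boxes."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    hits = []
    for (x, y, w, h) in color_candidates(bgr):
        roi = gray[y:y + h, x:x + w]
        for (dx, dy, dw, dh) in verifier.detectMultiScale(roi, 1.1, 5):
            hits.append((x + dx, y + dy, dw, dh))
    return hits
```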

A Fast and Efficient Haar-Like Feature Selection Algorithm for Object Detection (객체검출을 위한 빠르고 효율적인 Haar-Like 피쳐 선택 알고리즘)

  • Chung, Byung Woo;Park, Ki-Yeong;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences, v.38A no.6, pp.486-491, 2013
  • This paper proposes a fast and efficient Haar-like feature selection algorithm for training the classifier used in object detection. Many of the features selected by the existing AdaBoost-based Haar-like feature selection algorithm are either similar in shape or overlapping, because only each feature's error rate is considered. The proposed algorithm calculates the similarity of features based on their shapes and the distance between them. Fast and efficient feature selection is made possible by removing each selected feature, together with features highly similar to it, from the feature set. The FERET face database is used to compare the performance of classifiers trained by the previous algorithm and the proposed algorithm. Experimental results show that the classifier trained by the proposed method outperforms the one trained by the previous method. When the classifiers are trained to show the same performance, the proposed method reduces the number of features used in classification by 20%.
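
The key step is that once a feature has been selected, features that look nearly identical (same shape type, nearby position, similar size) are removed from the pool before the next boosting round, which both diversifies the selection and shrinks the search. The sketch below illustrates one such similarity test and pruning step; the feature representation and the similarity formula are illustrative assumptions, not the paper's exact definition.

```python
from dataclasses import dataclass

@dataclass
class HaarFeature:
    shape: str   # e.g. "edge_h", "edge_v", "line_h", "line_v", "diag"
    x: int
    y: int
    w: int
    h: int

def similarity(a: HaarFeature, b: HaarFeature) -> float:
    """Combine shape identity with a normalized position/size distance."""
    if a.shape != b.shape:
        return 0.0
    pos_dist = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
    size_dist = abs(a.w - b.w) + abs(a.h - b.h)
    scale = max(a.w + a.h, b.w + b.h)
    return max(0.0, 1.0 - (pos_dist + size_dist) / scale)

def prune_pool(pool, selected, threshold=0.7):
    """Drop the selected feature and everything too similar to it."""
    return [f for f in pool
            if f is not selected and similarity(f, selected) < threshold]

# Per AdaBoost round: pick the feature with the lowest weighted error from
# `pool`, add it to the strong classifier, then call prune_pool before the
# next round so near-duplicates are never re-examined.
```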