• Title/Summary/Keyword: illumination estimation


Improved Block-based Background Modeling Using Adaptive Parameter Estimation (적응적 파라미터 추정을 통한 향상된 블록 기반 배경 모델링)

  • Kim, Hanj-Jun;Lee, Young-Hyun;Song, Tae-Yup;Ku, Bon-Hwa;Ko, Han-Seok
    • Journal of the Korea Society of Computer and Information / v.16 no.4 / pp.73-81 / 2011
  • In this paper, an improved block-based background modeling technique is proposed that uses adaptive parameter estimation to judiciously adjust the number of model histograms at each frame. The conventional block-based background modeling method keeps a fixed number of background model histograms, which results in false negatives when the image sequence contains rapid illumination changes or swiftly moving objects, and in false positives with motionless objects. In addition, the optimal number of model histograms, which varies with the type of input image, must be found manually. We demonstrate that the proposed method is promising through representative performance evaluations, including background modeling in an elevator environment, which may involve rapid illumination changes, moving objects, and motionless objects.
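The adaptive-histogram idea in this abstract can be sketched in Python. Everything below (bin count, match threshold, model cap, learning rate) is illustrative, not taken from the paper:

```python
import numpy as np

def block_histogram(block, bins=8):
    # Normalized grayscale intensity histogram of one image block.
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def classify_block(block, models, match_thresh=0.5, max_models=4):
    """Match a block against its stored background histograms.

    Returns (is_background, models). A new histogram is added when no
    model matches, and the oldest is dropped once max_models is reached,
    so the per-block model count adapts to how much the block changes.
    """
    h = block_histogram(block)
    # Histogram intersection: 1.0 means identical distributions.
    similarities = [np.minimum(h, m).sum() for m in models]
    if similarities and max(similarities) >= match_thresh:
        best = int(np.argmax(similarities))
        models[best] = 0.9 * models[best] + 0.1 * h  # slow adaptation
        return True, models
    models.append(h)
    if len(models) > max_models:
        models.pop(0)
    return False, models
```

A static dark block keeps matching its stored histogram (background), while a sudden bright block fails the match and is flagged as foreground until its own histogram is learned.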

Visual Voice Activity Detection and Adaptive Threshold Estimation for Speech Recognition (음성인식기 성능 향상을 위한 영상기반 음성구간 검출 및 적응적 문턱값 추정)

  • Song, Taeyup;Lee, Kyungsun;Kim, Sung Soo;Lee, Jae-Won;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.34 no.4 / pp.321-327 / 2015
  • In this paper, we propose an algorithm for robust Visual Voice Activity Detection (VVAD) for enhanced speech recognition. Conventional VVAD algorithms detect visual speech frames by estimating the motion of the lip region with optical flow or with Chaos-inspired measures. Optical-flow-based VVAD is difficult to adopt in driving scenarios because of its computational complexity, while the Chaos-theory-based method, although invariant to illumination changes, is sensitive to translations caused by the driver's head movements. The proposed Local Variance Histogram (LVH) is robust to pixel intensity changes arising from both illumination change and translation. For improved performance under environmental changes, we further adopt a novel threshold estimation based on the total variance change. Experimental results show that the proposed VVAD algorithm remains robust in various driving situations.
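A minimal sketch of the local-variance idea: block-wise variance of the lip region is unchanged by a uniform brightness shift, and a frame is flagged as speech when the total variance change is large. The block size and threshold below are assumptions, not the paper's values:

```python
import numpy as np

def local_variance_map(roi, block=4):
    """Variance of each non-overlapping block x block patch of the lip ROI."""
    h, w = roi.shape
    h, w = h - h % block, w - w % block
    patches = roi[:h, :w].reshape(h // block, block, w // block, block)
    return patches.var(axis=(1, 3))

def is_speech_frame(prev_roi, cur_roi, thresh=1.0):
    """Flag a frame as speech when the total change of the local-variance
    map between consecutive frames exceeds a threshold. The variance map
    is insensitive to a global brightness offset (illumination change)."""
    dv = np.abs(local_variance_map(cur_roi) - local_variance_map(prev_roi))
    return dv.sum() > thresh
```

A pure illumination offset leaves every patch variance unchanged, so such frames are not mistaken for speech, while a texture change around the mouth does trip the detector.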

Storage Life Estimation of Magnesium Flare Material for 81 mm Illuminating Projectile (81 mm 조명탄용 마그네슘계 조명제 저장수명 예측)

  • Back, Seungjun;Son, Youngkap;Lim, Sunghwan;Myung, Inho
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.3 / pp.267-274 / 2015
  • It is necessary both to analyze the root cause of the non-conformance of effective illumination time to the specification and to estimate the storage lifetime of 81 mm illuminating projectiles stockpiled for over 10 years. In this paper, an aging mechanism of the magnesium flare material under long-term storage was hypothesized, and two-stage tests, a pre-test and a main test based on accelerated degradation testing, were performed. A moisture-proof field storage environment was set up, and illumination times in the accelerated degradation tests were measured at temperatures of 60 and 70 °C. Storage reliability of the projectile was then estimated by analyzing the measured data and fitting distribution-based degradation models to them. The B10 life, by which 10 % of a population of the projectiles will have failed at a storage temperature of 25 °C, was estimated to be about 7 years.
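The paper fits distribution-based degradation models to the accelerated test data; as a generic illustration of how life observed at 60-70 °C is extrapolated to a 25 °C storage condition, one common choice is an Arrhenius acceleration factor. The activation energy and test life below are placeholders, not the paper's estimates:

```python
import math

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor between a stress temperature and the
    use temperature, both in Celsius. ea_ev (activation energy in eV) is
    a placeholder value, not taken from the paper."""
    k = 8.617e-5  # Boltzmann constant, eV/K
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / k) * (1.0 / t_use - 1.0 / t_stress))

# Life observed at 70 degC scales up by the acceleration factor at 25 degC use:
af = acceleration_factor(25, 70)
life_at_70c_years = 0.3                 # hypothetical accelerated-test life
life_at_25c_years = life_at_70c_years * af
```

The hotter the stress condition relative to use, the larger the factor, which is why a months-long oven test can speak to years of storage.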

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.313-321 / 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems is face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with an average head pose mean error of less than 4.5° in real time.
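Joint face detection and pose estimation is trained with a combined objective; a minimal sketch of such a multi-task loss follows. The loss forms and the weight are illustrative, not the paper's exact formulation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multitask_loss(face_logits, face_label, pose_pred, pose_true, w=1.0):
    """Joint loss for multi-task learning: face/non-face cross-entropy plus
    a weighted MSE on the head-pose angles (yaw, pitch, roll). A single
    shared network minimizing this sum learns both tasks at once."""
    cls_loss = -np.log(softmax(face_logits)[face_label] + 1e-12)
    pose_loss = np.mean((np.asarray(pose_pred) - np.asarray(pose_true)) ** 2)
    return cls_loss + w * pose_loss
```

A correct detection with accurate angles should score a much lower joint loss than a misclassification with a 30° pose error.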

Motion Field Estimation Using U-Disparity Map in Vehicle Environment

  • Seo, Seung-Woo;Lee, Gyu-Cheol;Yoo, Ji-Sang
    • Journal of Electrical Engineering and Technology / v.12 no.1 / pp.428-435 / 2017
  • In this paper, we propose a novel motion field estimation algorithm that applies a U-disparity map and forward-backward error removal in a vehicular environment. In general, motion appears in an image obtained by a vehicle-mounted camera because of the vehicle's movement; however, the obtained motion vector is inaccurate because of surrounding environmental factors such as illumination changes and vehicle shaking. It is especially difficult to extract an accurate motion vector on the road surface, where adjacent pixel values are similar. The proposed algorithm therefore first removes the road-surface region from the image using a U-disparity map, and then computes optical flow, which represents the motion vectors of objects, in the remaining part of the image. The algorithm also uses a forward-backward error-removal technique to improve motion-vector accuracy, and the vehicle's movement is predicted by applying RANSAC (RANdom SAmple Consensus) to the obtained motion vectors, resulting in the generation of a motion field. Experimental results show that the performance of the proposed algorithm is superior to that of an existing algorithm.
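The forward-backward error-removal step can be sketched on its own: a point tracked to the next frame and then back should land near its starting position, and vectors that fail this check are discarded. The threshold is an assumption:

```python
import numpy as np

def forward_backward_filter(pts, pts_fwd, pts_bwd, eps=1.0):
    """Keep only motion vectors that pass the forward-backward check.

    pts, pts_fwd, pts_bwd are (N, 2) arrays: the original points, their
    positions tracked forward to the next frame, and the positions obtained
    by tracking pts_fwd back to the first frame. A reliable track returns
    to within eps pixels of where it started."""
    err = np.linalg.norm(pts - pts_bwd, axis=1)
    keep = err < eps
    return pts[keep], pts_fwd[keep] - pts[keep]  # surviving points + vectors
```

In the example below the first track round-trips to within 0.1 px and survives, while the second drifts by several pixels and is removed before RANSAC would see it.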

A Multi-Stage Convolution Machine with Scaling and Dilation for Human Pose Estimation

  • Nie, Yali;Lee, Jaehwan;Yoon, Sook;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3182-3198 / 2019
  • Vision-based human pose estimation has been considered one of the most challenging research subjects because of problems including confounding background clutter, the diversity of human appearances, and illumination changes in scenes. To tackle these problems, we propose a new multi-stage convolution machine for estimating human pose. To provide better heatmap predictions of body joints, the proposed machine repeatedly produces multiple predictions across stages, with receptive fields large enough to learn long-range spatial relationships. The stages are composed of various modules according to their strategic purposes. A pyramid stacking module and a dilation module are used to handle human pose at multiple scales. Their multi-scale information from different receptive fields is fused by concatenation, which captures more contextual information from different features. In addition, the spatial and channel information of a given input is converted into gating factors by squeezing each feature map to a single numeric value according to its importance, so that each network channel receives a different weight. Compared with other ConvNet-based architectures, we demonstrate that the proposed architecture achieves higher accuracy in experiments on the standard LSP and MPII pose benchmarks.
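The squeeze-to-a-single-value channel gating described above can be sketched as follows; a real module would also contain learned layers between the squeeze and the gate, which are omitted here:

```python
import numpy as np

def channel_gate(feat):
    """Squeeze each channel of a (C, H, W) feature map to one number by
    global average pooling, turn it into a 0-1 gating factor with a
    sigmoid, and reweight the channels by those factors."""
    squeezed = feat.mean(axis=(1, 2))         # (C,): one value per channel
    gates = 1.0 / (1.0 + np.exp(-squeezed))   # sigmoid -> (0, 1)
    return feat * gates[:, None, None]        # per-channel reweighting
```

Channels with strong average activation keep nearly their full response (gate near 1), while weak channels are suppressed, which is the weighting effect the abstract describes.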

Detector Evaluation Scheme Including the Concept of Confidence Interval in Statistics (통계적 신뢰구간 개념을 도입한 검지기 성능평가)

  • Jang, Jin-Hwan;Kim, Byung-Hwa
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.10 no.1 / pp.67-75 / 2011
  • This paper presents a new technique for evaluating the performance of vehicle detectors with interval estimation, rather than the conventional point estimation, so that a statistical confidence interval can be reported. The methodology is divided into three parts: the sampling plan, analysis of the characteristics of the evaluation indices, and the expression of evaluation results. Although many statistical sampling plans exist, stratified random sampling is regarded as the most appropriate, considering that detector performance varies with traffic, illumination, and meteorological conditions. No single evaluation index suits every detector evaluation, so the characteristics of the evaluation indices were thoroughly analyzed and a reasonable process for choosing the best index is proposed. Finally, methods are presented for expressing the evaluation result over the entire evaluation period and for each individual analysis interval. To overcome the drawbacks of point estimation, interval estimation, through which a statistical confidence interval can be reported, is introduced to enhance the statistical reliability of traffic detector evaluation. This research moves vehicle detector evaluation one step forward.
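The move from point to interval estimation can be illustrated with a normal-approximation confidence interval for a detection rate; this is a generic textbook interval, not the paper's exact procedure:

```python
import math

def detection_rate_ci(hits, samples, z=1.96):
    """95% normal-approximation confidence interval for a detection rate
    estimated from `hits` correct detections in `samples` trials, so the
    evaluation reports a range rather than a bare point estimate."""
    p = hits / samples
    half = z * math.sqrt(p * (1.0 - p) / samples)
    return max(0.0, p - half), min(1.0, p + half)
```

With 90 detections in 100 vehicles the rate is 0.9 but the interval spans roughly ±6 percentage points; collecting ten times the sample (e.g. across illumination strata) shrinks the interval markedly, which is exactly the statistical reliability the paper argues for.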

Color cast detection based on color by correlation and color constancy algorithm using kernel density estimation (색 상관 관계 기반의 색조 검출 및 핵밀도 추정을 이용한 색 항상성 알고리즘)

  • Jung, Jun-Woo;Kim, Gyeong-Hwan
    • Journal of Korea Multimedia Society / v.13 no.4 / pp.535-546 / 2010
  • Digital images exhibit undesired color casts due to varying illumination conditions and the intrinsic characteristics of cameras. Since color casts degrade the performance of color-based representations, color correction is required before further analysis of the images. In this paper, an algorithm for the detection and removal of color casts is presented. The proposed algorithm consists of four steps: retrieving similar images using color by correlation, extraction of near-neutral color regions, kernel density estimation, and removal of color casts. Ambiguities in the near-neutral color regions found by the color by correlation algorithm are resolved using kernel density estimation. The method determines whether a color cast is present from the chromaticity distribution in the near-neutral color regions, and removes it to achieve color constancy. Experimental results suggest that the proposed method outperforms both the gray world algorithm and the color by correlation algorithm.
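The gray world algorithm used as a baseline here has a compact form: each channel is rescaled so that its mean matches the global mean, which removes a uniform cast under the assumption that the average scene color is gray:

```python
import numpy as np

def gray_world(img):
    """Gray-world color correction for a float (H, W, 3) image: scale each
    channel so its mean equals the global mean, removing a uniform cast."""
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel means (R, G, B)
    gray = means.mean()                      # the gray level to aim for
    return img * (gray / means)              # per-channel rescaling
```

Applied to a neutral image tinted by a uniform cast, the correction restores equal channel means; it fails exactly where the gray-world assumption fails, which motivates the color-by-correlation and density-estimation machinery of the paper.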

Webcam-Based 2D Eye Gaze Estimation System By Means of Binary Deformable Eyeball Templates

  • Kim, Jin-Woo
    • Journal of information and communication convergence engineering / v.8 no.5 / pp.575-580 / 2010
  • Eye gaze as a form of input was primarily developed for users who are unable to use usual interaction devices such as the keyboard and mouse; however, with the increasing accuracy and decreasing development cost of eye gaze detection, it is likely to become a practical interaction method for able-bodied users in the near future as well. This paper explores a low-cost, robust, rotation- and illumination-independent eye gaze system for gaze-enhanced user interfaces. We introduce two new algorithms: one for fast, sub-pixel-precise pupil center detection, and one for 2D eye gaze estimation by deformable template matching. The first is a deformable angular integral search based on minimum intensity that localizes the eyeball (iris outer boundary) in grayscale eye-region images; it finds the pupil center, which is then used by the second algorithm for 2D eye gaze tracking. First, we detect the eye regions with Intel OpenCV AdaBoost Haar cascade classifiers and assign an approximate eyeball size based on the eye-region size. Second, the pupil center is detected using the DAISMI (Deformable Angular Integral Search by Minimum Intensity) algorithm. Then, using the percentage of black pixels over the eyeball circle area, the image is converted to binary (black and white) for use in the next stage, the DTBGE (Deformable Template Based 2D Gaze Estimation) algorithm. Finally, DTBGE is initialized with the pupil center coordinates, refines them, and estimates the final gaze direction and eyeball size. We have performed extensive experiments, achieved very encouraging results, and discuss the effectiveness of the proposed method through several experimental results.
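As a much-simplified stand-in for DAISMI's minimum-intensity search, the pupil center can be approximated by the centroid of the darkest pixels in the eye region; the dark-pixel fraction below is an assumption, and the real algorithm is considerably more elaborate:

```python
import numpy as np

def pupil_center(eye, dark_fraction=0.1):
    """Estimate the pupil center of a grayscale eye-region image as the
    centroid of its darkest pixels, exploiting the fact that the pupil is
    the minimum-intensity region of the eye. Returns (x, y)."""
    thresh = np.quantile(eye, dark_fraction)  # intensity cut for "dark"
    ys, xs = np.nonzero(eye <= thresh)        # coordinates of dark pixels
    return xs.mean(), ys.mean()
```

On a synthetic bright eye patch with a dark rectangle standing in for the pupil, the centroid lands at the rectangle's center.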

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.1 no.2 / pp.78-87 / 2012
  • In this paper, we address the tracking problems caused by camera motion and the rolling-shutter effects of the CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed that simultaneously tracks objects and compensates for the effects of camera motion. The proposed method uses an elastic registration (ER) algorithm that considers global affine motion as well as the brightness and contrast between images, assuming that camera motion produces an affine transform of the image between two successive frames. Because the camera motion is modeled globally by an affine transform, only the global affine model, rather than a local model, is considered. For intensity variation, only the brightness parameter is used; the contrast parameters of the original ER algorithm are ignored because the change in illumination between temporally adjacent frames is small. The proposed particle filtering consists of four steps: (i) prediction, (ii) compensation of the prediction state error based on camera motion estimation, (iii) update, and (iv) re-sampling. More particles are needed when camera motion introduces a prediction state error at the prediction step; the proposed method tracks the object of interest robustly by compensating for this error using the affine motion model estimated by ER. Experimental results show that the proposed method outperforms the conventional particle filter and can track moving objects robustly on consumer handheld imaging devices.
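The four steps (i)-(iv) can be sketched for a 1-D state; the `motion` offset below stands in for the ER-based camera-motion compensation, and all noise parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement, motion=0.0,
                         proc_std=1.0, meas_std=1.0):
    """One cycle of the four steps above for a 1-D object position."""
    # (i) prediction: propagate particles through the motion model
    particles = particles + rng.normal(0.0, proc_std, particles.shape)
    # (ii) compensate the prediction state error with the estimated motion
    particles = particles + motion
    # (iii) update: reweight particles by the measurement likelihood
    weights = weights * np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights = weights / weights.sum()
    # (iv) systematic re-sampling when the effective sample size collapses
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < 0.5 * n:
        positions = (np.arange(n) + rng.uniform()) / n
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

Iterating the step on a fixed measurement pulls the weighted particle mean onto the observed position, which is the basic tracking behavior the modified filter builds on.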
