• Title/Abstract/Keywords: illumination estimation

113 search results (processing time 0.023 s)

적응적 파라미터 추정을 통한 향상된 블록 기반 배경 모델링 (Improved Block-based Background Modeling Using Adaptive Parameter Estimation)

  • 김한준;이영현;송태엽;구본화;고한석
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 16, No. 4
    • /
    • pp.73-81
    • /
    • 2011
  • This paper proposes a block-based background modeling method that adaptively adjusts the number of model histograms. Existing block-based background modeling methods fix the number of model histograms per block. As a result, false detections occur under illumination changes and for moving objects, stationary objects go undetected, and the optimal number of model histograms, which can differ for each type of input video, must be found manually. Experiments in an elevator, covering a scenario with illumination change and a moving object and a scenario with no illumination change and a stationary object, compare the proposed algorithm against the existing method and demonstrate its effectiveness.
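As a rough, hypothetical sketch of the block-histogram matching such models rely on (the histogram-intersection measure, matching threshold, and cap on the number of models below are illustrative assumptions, not the paper's actual rules):

```python
def hist_intersection(h1, h2):
    # similarity of two normalized block histograms, in [0, 1]
    return sum(min(a, b) for a, b in zip(h1, h2))

def classify_block(block_hist, model_hists, match_thresh=0.7, max_models=5):
    """Label a block as background if it matches any stored model histogram;
    otherwise treat it as foreground and learn a new model histogram,
    up to max_models -- a stand-in for an adaptive model count."""
    for m in model_hists:
        if hist_intersection(block_hist, m) >= match_thresh:
            return "background"
    if len(model_hists) < max_models:
        model_hists.append(list(block_hist))  # adapt: learn a new appearance
    return "foreground"
```

A fixed `max_models` reproduces the limitation the paper criticizes; the proposed method's contribution is precisely to adjust that count per block.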

음성인식기 성능 향상을 위한 영상기반 음성구간 검출 및 적응적 문턱값 추정 (Visual Voice Activity Detection and Adaptive Threshold Estimation for Speech Recognition)

  • 송태엽;이경선;김성수;이재원;고한석
    • 한국음향학회지
    • /
    • Vol. 34, No. 4
    • /
    • pp.321-327
    • /
    • 2015
  • This study proposes a vision-based voice activity detection (VAD) method for improving speech recognizer performance. Existing optical-flow-based methods cannot cope with illumination changes and are computationally expensive, which makes them hard to apply to smart devices on mobile platforms; chaos-theory-based methods are robust to illumination changes but suffer from false detections caused by vehicle motion and inaccurate lip detection. To address these problems, this study proposes a VAD algorithm based on a Local Variance Histogram (LVH) and adaptive threshold estimation. The proposed method is robust to pixel changes caused by illumination variation, runs fast, and, thanks to the adaptive threshold, robustly detects the speech of a driver subject to large illumination changes and motion. Evaluation on videos of drivers recorded in a moving vehicle confirms that the proposed method outperforms existing methods.
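A minimal 1-D sketch of the two ingredients named above, local variance and an adaptive threshold; the window size and the mean-plus-k-sigma threshold rule are illustrative assumptions, not the paper's definitions:

```python
from statistics import mean, pstdev

def local_variance(pixels, win=3):
    """Variance of each length-`win` sliding window over a 1-D pixel row
    (a 1-D stand-in for the 2-D local variance that feeds the LVH)."""
    out = []
    for i in range(len(pixels) - win + 1):
        w = pixels[i:i + win]
        m = sum(w) / win
        out.append(sum((p - m) ** 2 for p in w) / win)
    return out

def adaptive_threshold(recent_scores, k=2.0):
    # threshold tracks recent frame scores: mean + k * std (assumed rule)
    return mean(recent_scores) + k * pstdev(recent_scores)

def is_speech(frame_score, recent_scores):
    # a frame counts as speech when its score exceeds the adaptive threshold
    return frame_score > adaptive_threshold(recent_scores)
```

Because the threshold is re-estimated from recent frames, a global illumination shift that raises all scores also raises the threshold, which is the intuition behind the robustness claim.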

81 mm 조명탄용 마그네슘계 조명제 저장수명 예측 (Storage Life Estimation of Magnesium Flare Material for 81 mm Illuminating Projectile)

  • 백승준;손영갑;임성환;명인호
    • 한국군사과학기술학회지
    • /
    • Vol. 18, No. 3
    • /
    • pp.267-274
    • /
    • 2015
  • It is necessary both to analyze the root cause of non-conformance of the effective illumination time to the specification and to estimate the storage lifetime of 81 mm illuminating projectiles stockpiled for over 10 years. In this paper, an aging mechanism of the magnesium flare material due to long-term storage was postulated, and two-stage tests, a pre-test and a main test based on accelerated degradation testing, were performed. A moisture-proof field storage environment was set up, and illumination times in the accelerated degradation tests at temperatures of 60 and $70^{\circ}C$ were measured. The storage reliability of the projectile was then estimated by analyzing the measured data and applying distribution-based degradation models to them. The $B_{10}$ life, by which 10 % of a population of the projectiles will have failed at a storage temperature of $25^{\circ}C$, was estimated at about 7 years.
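The abstract fits distribution-based degradation models to the measured data; assuming, for illustration only, a Weibull life distribution with hypothetical parameters, the $B_{10}$ computation looks like:

```python
import math

def weibull_b_life(eta, beta, p=0.10):
    """Time by which a fraction p of the population has failed, for a
    Weibull life distribution with scale eta and shape beta:
    F(t) = 1 - exp(-(t / eta) ** beta)  =>  t_p = eta * (-ln(1 - p)) ** (1 / beta)
    With p = 0.10 this is the B10 life reported in the abstract."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)
```

For example, hypothetical parameters `eta=15` (years) and `beta=2.5` give a $B_{10}$ life of about 6.1 years; the paper's roughly 7-year figure comes from its own fitted model, not from these numbers.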

다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망 (Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks)

  • 안병태;최동걸;권인소
    • 로봇학회논문지
    • /
    • Vol. 12, No. 3
    • /
    • pp.313-321
    • /
    • 2017
  • Face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation are among the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked the accuracy, robustness, or processing speed needed in practice. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with an average head pose estimation error of less than $4.5^{\circ}$ in real time.

Motion Field Estimation Using U-Disparity Map in Vehicle Environment

  • Seo, Seung-Woo;Lee, Gyu-Cheol;Yoo, Ji-Sang
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 12, No. 1
    • /
    • pp.428-435
    • /
    • 2017
  • In this paper, we propose a novel motion field estimation algorithm that applies a U-disparity map and forward-backward error removal in a vehicular environment. In general, an image obtained by a camera attached to a vehicle contains motion induced by the vehicle's movement; however, the resulting motion vectors are inaccurate because of surrounding environmental factors such as illumination changes and vehicle shaking. Accurate motion vectors are especially difficult to extract on the road surface, where adjacent pixel values are similar. The proposed algorithm therefore first removes the road-surface region from the image using a U-disparity map, and then computes optical flow, which represents the motion vectors of objects, on the remaining part of the image. A forward-backward error-removal technique further improves motion-vector accuracy, and the vehicle's movement is predicted by applying RANSAC (RANdom SAmple Consensus) to the obtained motion vectors, generating a motion field. Experimental results show that the proposed algorithm outperforms an existing algorithm.
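The forward-backward error-removal step can be sketched as follows; the flow functions and the error threshold here are hypothetical stand-ins for an actual optical-flow tracker:

```python
import math

def fb_error_filter(points, flow_fwd, flow_bwd, max_err=0.5):
    """Keep only points whose forward-then-backward tracked position
    returns close to where it started (forward-backward consistency).
    flow_fwd / flow_bwd are callables mapping (x, y) to a displacement
    (dx, dy) between the two frames."""
    kept = []
    for (x, y) in points:
        dx, dy = flow_fwd(x, y)           # track into the next frame
        x1, y1 = x + dx, y + dy
        bx, by = flow_bwd(x1, y1)         # track back into the first frame
        err = math.hypot(x1 + bx - x, y1 + by - y)
        if err <= max_err:                # inconsistent tracks are removed
            kept.append(((x, y), (dx, dy)))
    return kept
```

The surviving vectors are the ones a robust fit such as RANSAC would then consume to estimate the vehicle's ego-motion.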

A Multi-Stage Convolution Machine with Scaling and Dilation for Human Pose Estimation

  • Nie, Yali;Lee, Jaehwan;Yoon, Sook;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 6
    • /
    • pp.3182-3198
    • /
    • 2019
  • Vision-based human pose estimation is considered a challenging research subject because of confounding background clutter, the diversity of human appearance, and illumination changes in scenes. To tackle these problems, we propose a new multi-stage convolution machine for estimating human pose. To improve heatmap prediction of body joints, the machine repeatedly produces predictions over stages, with receptive fields large enough to learn long-range spatial relationships. The stages are composed of modules chosen for their strategic purposes: a pyramid stacking module and a dilation module handle human pose at multiple scales, and their multi-scale information from different receptive fields is fused by concatenation, capturing more contextual information from different features. In addition, the spatial and channel information of an input is converted into gating factors by squeezing each feature map to a single numeric value according to its importance, giving each network channel a different weight. Compared with other ConvNet-based architectures, the proposed architecture achieves higher accuracy in experiments on the standard LSP and MPII pose benchmarks.
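The squeeze-and-gate mechanism described above (each channel squeezed to one number, converted to a gating factor, and used to reweight the channel) might be sketched as follows; this toy version has no learned weights, unlike the real network:

```python
import math

def channel_gate(feature_maps):
    """Squeeze each channel's 2-D feature map to one value (global average),
    turn the values into gating factors with a sigmoid, and rescale the
    channels -- a simplified stand-in for learned channel gating."""
    # squeeze: one number per channel
    squeezed = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
                for fm in feature_maps]
    # excitation: sigmoid gate per channel (no learned weights in this sketch)
    gates = [1.0 / (1.0 + math.exp(-s)) for s in squeezed]
    # rescale every value of each channel by its gate
    return [[[v * g for v in row] for row in fm]
            for fm, g in zip(feature_maps, gates)]
```

In the actual network the squeezed values pass through small learned layers before the sigmoid, so the gates depend on the data in a trainable way rather than on the raw channel means.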

통계적 신뢰구간 개념을 도입한 검지기 성능평가 (Detector Evaluation Scheme Including the Concept of Confidence Interval in Statistics)

  • 장진환;김병화
    • 한국ITS학회 논문지
    • /
    • Vol. 10, No. 1
    • /
    • pp.67-75
    • /
    • 2011
  • This paper presents a detector evaluation scheme that reports performance as a statistical confidence interval (interval estimate) rather than as the single value (point estimate) used to date. Because an interval estimate generally conveys more information about the sample statistics than a point estimate, it can improve the reliability of detector evaluation results that have so far been reported as single values. The methodology consists of three parts: sample extraction, evaluation-measure analysis, and result presentation. Among the various statistical sampling methods, stratified sampling was judged the most suitable for deriving confidence intervals, given that vehicle-detector performance varies with traffic, illumination, and weather conditions. The characteristics of widely used detector evaluation measures were analyzed in detail, and a process was established that lets the evaluator choose the measure appropriate for the detection data at hand. Finally, a methodology is presented for reflecting the statistical confidence intervals of both the full evaluation period (e.g., 30 minutes) and the individual analysis units (e.g., 1 minute). By enabling the confidence-interval reporting that single-value reporting made impossible, this study is expected to improve the reliability of detector evaluation results.
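The shift from a point estimate to an interval estimate can be illustrated with the standard normal-approximation confidence interval for a detection rate; the formula is textbook material, not specific to the paper's measures:

```python
import math

def detection_rate_interval(hits, trials, z=1.96):
    """Point estimate and normal-approximation confidence interval for a
    detection rate (z = 1.96 gives a 95 % interval). Returns
    (point_estimate, (lower, upper)), clipped to [0, 1]."""
    p = hits / trials
    half = z * math.sqrt(p * (1.0 - p) / trials)
    return p, (max(0.0, p - half), min(1.0, p + half))
```

For example, 90 correct detections out of 100 trials gives a point estimate of 0.90 with a 95 % interval of roughly (0.84, 0.96), which conveys the sampling uncertainty that a single value hides.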

색 상관 관계 기반의 색조 검출 및 핵밀도 추정을 이용한 색 항상성 알고리즘 (Color cast detection based on color by correlation and color constancy algorithm using kernel density estimation)

  • 정준우;김경환
    • 한국멀티미디어학회논문지
    • /
    • Vol. 13, No. 4
    • /
    • pp.535-546
    • /
    • 2010
  • Digital images can exhibit unintended color casts due to illumination conditions and the intrinsic characteristics of the capturing camera. Because a color cast makes consistent perception and representation of color information difficult, separate color correction is required. This paper proposes a color-cast detection and removal method consisting of four steps: training-image selection using color by correlation, extraction of candidate gray-axis regions, kernel density estimation, and cast removal. Ambiguous gray-axis regions among the candidates are eliminated using kernel density estimation. The presence of a color cast is determined by examining the distribution of color components in the candidate gray-axis regions, and when a cast exists, it is removed so that color constancy is maintained. Experiments confirm that the proposed method estimates color casts more accurately than the gray world and color by correlation methods.
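The gray world baseline that the paper compares against can be sketched in a few lines; this is the textbook method, not the proposed algorithm:

```python
def gray_world_correct(pixels):
    """Gray world color constancy: scale each RGB channel so its mean
    matches the overall mean intensity, removing a global color cast.
    `pixels` is a list of (r, g, b) float tuples."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]   # per-channel gain
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]
```

Gray world assumes the scene average is achromatic, which fails on scenes dominated by one color; restricting the estimate to (candidate) gray-axis regions, as the paper does, is one way around that assumption.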

Webcam-Based 2D Eye Gaze Estimation System By Means of Binary Deformable Eyeball Templates

  • Kim, Jin-Woo
    • Journal of information and communication convergence engineering
    • /
    • Vol. 8, No. 5
    • /
    • pp.575-580
    • /
    • 2010
  • Eye gaze input was primarily developed for users who are unable to use usual interaction devices such as the keyboard and mouse; however, with increasing accuracy in eye gaze detection and decreasing development cost, it is likely to become a practical interaction method for able-bodied users in the near future as well. This paper explores a low-cost, robust, rotation- and illumination-independent eye gaze system for gaze-enhanced user interfaces. We introduce two new algorithms: fast, sub-pixel-precise pupil center detection, and 2D eye gaze estimation based on deformable template matching. First, eye regions are detected with Intel OpenCV AdaBoost Haar cascade classifiers, and an approximate eyeball size is assigned according to the eye-region size. Second, the DAISMI (Deformable Angular Integral Search by Minimum Intensity) algorithm localizes the eyeball (iris outer boundary) in grayscale eye-region images and detects the pupil center. The image is then binarized using the percentage of black pixels over the eyeball circle area, for use in the DTBGE (Deformable Template Based 2D Gaze Estimation) algorithm. Finally, DTBGE takes the initial pupil center coordinates, produces refined pupil center coordinates, and estimates the final gaze direction and eyeball size. We performed extensive experiments and achieved very encouraging results, and we discuss the effectiveness of the proposed method through several experimental results.

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 1, No. 2
    • /
    • pp.78-87
    • /
    • 2012
  • In this paper, we address the tracking problems caused by camera motion and the rolling-shutter effect of the CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed that simultaneously tracks objects and compensates for camera motion. Assuming that camera motion produces an affine transform of the image between two successive frames, the method uses an elastic registration (ER) algorithm that considers global affine motion as well as brightness and contrast between images; only the global affine model, rather than a local model, is considered. Of the intensity variation, only the brightness parameter is used: the contrast parameters of the original ER algorithm are ignored because the illumination change between temporally adjacent frames is small. The proposed particle filtering consists of four steps: (i) prediction, (ii) compensation of the prediction-state error based on camera motion estimation, (iii) update, and (iv) re-sampling. More particles are needed when camera motion generates a prediction-state error for an object at the prediction step; the proposed method tracks the object of interest robustly by compensating for this error using the affine motion model estimated from ER. Experimental results show that the proposed method outperforms the conventional particle filter and tracks moving objects robustly on consumer handheld imaging devices.
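The four filtering steps can be sketched as follows; the 1-D state, Gaussian process noise, and scalar camera shift are simplifications standing in for the paper's affine model estimated by ER:

```python
import random

def particle_filter_step(particles, motion, camera_shift, likelihood):
    """One cycle of the four steps: (i) predict, (ii) compensate the
    prediction-state error with an external camera-motion estimate,
    (iii) update the weights, (iv) resample."""
    # (i) predict with process noise, (ii) subtract the estimated camera motion
    predicted = [p + motion + random.gauss(0.0, 1.0) - camera_shift
                 for p in particles]
    # (iii) weight each particle by the observation likelihood
    w = [likelihood(p) for p in predicted]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # (iv) resample in proportion to the weights, then reset them to uniform
    resampled = random.choices(predicted, weights=w, k=len(predicted))
    return resampled, [1.0 / len(resampled)] * len(resampled)
```

Without step (ii), a camera jerk shifts every predicted particle away from the object and the likelihood collapses; subtracting the estimated camera motion keeps the particle cloud centered on the object with the same number of particles.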
