• Title/Summary/Keyword: illumination estimation


Lane Information Fusion Scheme using Multiple Lane Sensors (다중센서 기반 차선정보 시공간 융합기법)

  • Lee, Soomok;Park, Gikwang;Seo, Seung-woo
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.142-149 / 2015
  • Most mono-camera-based lane detection systems are fragile under poor illumination conditions. To compensate for the limitations of single-sensor utilization, a lane information fusion system using multiple lane sensors is an alternative that stabilizes performance and guarantees high precision. However, conventional fusion schemes, which concern only object detection, are inappropriate for lane information fusion. Even the few studies that consider lane information fusion have dealt only with a limited back-up sensor role, or have omitted cases of asynchronous multi-rate operation and differing coverage. In this paper, we propose a lane information fusion scheme that utilizes multiple lane sensors with different coverage and cycle times. Precise lane information fusion is achieved by the proposed framework, which considers the individual ranging capability and processing time of diverse types of lane sensors. In addition, a novel lane estimation model is proposed to synchronize multi-rate sensors precisely by up-sampling sparse lane information signals. Through quantitative vehicle-level experiments with an around-view monitoring system and a frontal camera system, we demonstrate the robustness of the proposed lane fusion scheme.
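
The synchronization step described above up-samples the slower sensor's sparse lane signal onto the faster sensor's clock. The sketch below is a minimal stand-in for that step using linear interpolation; the timestamps and lane-offset values are hypothetical, not from the paper.

```python
# Up-sample a sparse lane signal onto a faster sensor's timestamps by
# linear interpolation, clamping outside the observed range.
def upsample(times, values, query_times):
    out = []
    for t in query_times:
        if t <= times[0]:          # clamp before the first sample
            out.append(values[0])
        elif t >= times[-1]:       # clamp after the last sample
            out.append(values[-1])
        else:
            i = max(j for j in range(len(times) - 1) if times[j] <= t)
            frac = (t - times[i]) / (times[i + 1] - times[i])
            out.append(values[i] + frac * (values[i + 1] - values[i]))
    return out

# Hypothetical 10 Hz lane offsets (metres) resampled onto 50 Hz camera ticks.
print(upsample([0.0, 0.1, 0.2], [1.0, 1.2, 1.1], [0.0, 0.02, 0.05, 0.1, 0.15]))
```

The paper's actual estimation model is richer than plain interpolation; this only illustrates the multi-rate alignment problem it solves.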

Development of a real-time crop recognition system using a stereo camera

  • Baek, Seung-Min;Kim, Wan-Soo;Kim, Yong-Joo;Chung, Sun-Ok;Nam, Kyu-Chul;Lee, Dae Hyun
    • Korean Journal of Agricultural Science / v.47 no.2 / pp.315-326 / 2020
  • In this study, a real-time crop recognition system was developed for an unmanned farm machine for upland farming. The crop recognition system was developed based on a stereo camera, and an image processing framework was proposed that consists of disparity matching, localization of the crop area, and estimation of crop height with coordinate transformations. The performance was evaluated by attaching the crop recognition system to a tractor for five representative crops (cabbage, potato, sesame, radish, and soybean). The test conditions were set at three levels of distance to the crop (100, 150, and 200 cm) and five levels of camera height (42, 44, 46, 48, and 50 cm). The mean relative error (MRE) was used to compare the measured and estimated heights. As a result, the MRE of Chinese cabbage was the lowest at 1.70%, and the MRE of soybean was the highest at 4.97%; crops with a more uniform height distribution are considered to yield a lower MRE. The results showed that the height of every crop was estimated with less than 5% MRE. The developed crop recognition system can be applied to various agricultural machinery, enhancing the accuracy of crop detection and its performance under various illumination conditions.
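
The evaluation metric above, mean relative error, is straightforward to compute. A minimal sketch, with hypothetical height values rather than the study's data:

```python
# Mean relative error (%): the average of |measured - estimated| / measured,
# scaled to percent, over paired height observations.
def mean_relative_error(measured, estimated):
    assert len(measured) == len(estimated) and measured
    return 100.0 * sum(abs(m - e) / m for m, e in zip(measured, estimated)) / len(measured)

measured = [32.0, 35.5, 30.2, 33.8]   # cm, hypothetical ground-truth heights
estimated = [31.4, 36.2, 29.7, 34.5]  # cm, hypothetical stereo estimates
print(round(mean_relative_error(measured, estimated), 2))
```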

Real Time Eye and Gaze Tracking (실시간 눈과 시선 위치 추적)

  • 이영식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.2 / pp.477-483 / 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often require a static head to work well and a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.
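
A GRNN of the kind named above is essentially Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of the training targets, so no analytical form of the pupil-to-screen mapping is needed. The calibration pairs below are hypothetical, not the paper's data:

```python
import math

# GRNN prediction: weight each training target by a Gaussian kernel on the
# distance between the query and that training input, then average.
def grnn_predict(x, train_x, train_y, sigma=1.0):
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
               for xi in train_x]
    total = sum(weights)
    return [sum(w * yi[d] for w, yi in zip(weights, train_y)) / total
            for d in range(len(train_y[0]))]

# Hypothetical calibration data: pupil parameters -> screen coordinates.
train_x = [(0.1, 0.2), (0.8, 0.9), (0.5, 0.5)]
train_y = [(100, 120), (900, 700), (500, 400)]
print(grnn_predict((0.45, 0.55), train_x, train_y, sigma=0.2))
```

With a small bandwidth `sigma`, the prediction at a calibration point reproduces that point's target, which is the behaviour the calibration procedure relies on.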

Real Time Eye and Gaze Tracking

  • Park Ho Sik;Nam Kee Hwan;Cho Hyeon Seob;Ra Sang Dong;Bae Cheol Soo
    • Proceedings of the IEEK Conference / 2004.08c / pp.857-861 / 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real time gaze tracking for interactive graphic display. Unlike most of the existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using the Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.


Antiblurry Dejitter Image Stabilization Method of Fuzzy Video for Driving Recorders

  • Xiong, Jing-Ying;Dai, Ming;Zhao, Chun-Lei;Wang, Ruo-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.6 / pp.3086-3103 / 2017
  • Video images captured by vehicle cameras often contain blurry or dithering frames due to inadvertent motion from bumps in the road or insufficient illumination during the morning or evening, which greatly degrades the representation and recognition of objects in the recordings. Therefore, a real-time electronic stabilization method to correct fuzzy video from driving recorders is proposed. In the first stage, feature detection, a coarse-to-fine inspection policy and a scale nonlinear diffusion filter are proposed to provide more accurate keypoints. Second, a new antiblurry binary descriptor and a feature point selection strategy for unintentional-motion estimation are proposed, which bring more discriminative power. In addition, a new evaluation criterion for affine region detectors is presented based on the percentage interval of repeatability. The experiments show that the proposed method exhibits improvement in detecting blurry corner points. Moreover, it improves the performance of the algorithm while guaranteeing high processing speed.

Sensibility Evaluation Model Research as to The Three-dimensional Surface Light Source set In The Interior (실내 3D 입체 면광원 조명연출에 관한 감성평가 모형 연구)

  • Lee, Jin-Sook;Park, Ji-Young;Jeong, Chan-Ung
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.29 no.6 / pp.14-26 / 2015
  • This study was conducted to analyze users' sensibility regarding lighting method, correlated color temperature, and illuminance by composing a surface light source projected onto a unit side of an interior wall, ceiling, and floor. 1) Analysis of the sensibility images showed that the "snug & tender" value increased as the correlated color temperature decreased, and the "energetic, cheerful" value increased as the illuminance level decreased. Furthermore, "unusual, unique" showed a higher value under the illuminated-floor condition. Finally, the higher the correlated color temperature, the higher the "energetic, cheerful" value. 2) Multi-regression analysis found that 3,000 K and 100 lx had the biggest influence on the 'snug' image, while 5,500 K and 500 lx had the biggest influence on the 'energetic' image. In addition, the illuminated floor had a big influence on the 'unusual' image, while 500 lx had the biggest influence on the 'refined' image.

Position Estimation of the Welding Panels for Sub-assembly line in Shipbuilding by Vision System (시각 장치를 사용한 조선 소조립 라인에서의 용접부재 위치 인식)

  • 노영준;고국원;조형석;윤재웅;전자롬
    • Proceedings of the Korean Society of Precision Engineering Conference / 1997.04a / pp.719-723 / 1997
  • Welding automation in the ship manufacturing process, especially in the sub-assembly line, is considered a difficult job because the welding parts are too huge, varied, and unstructured for a welding robot to weld fully automatically. The welding process in the sub-assembly line for ship manufacturing joins various stiffeners onto a base panel. In order to realize automatic robot welding in the sub-assembly line, the robot has to be equipped with a sensing system that recognizes the position of the parts. In this research, we developed a vision system to detect the position of the base panel for the sub-assembly line in the shipbuilding process. The vision system is composed of one CCD camera attached to the base of the robot and two 500 W halogen lamps for active illumination. In the image processing algorithm, the base panel is represented by two sets of lines located at its two corners through the Hough transform. However, various noise lines caused by highlights, scratches, stiffeners, conveyor rollers, and so on are contained in the captured image; this noise can be eliminated by region segmentation and thresholding in the Hough transform domain. The matching process that recognizes the position of the weld panel is executed by finding patterns in the Hough-transformed domain. The sets of experiments performed in the sub-assembly line show the effectiveness of the proposed algorithm.
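
The core of the detection step above, voting edge points into a (rho, theta) Hough accumulator and thresholding to suppress noise lines, can be sketched minimally as follows. The edge points are synthetic; a real system would extract them from the CCD image.

```python
import math
from collections import Counter

# Minimal Hough transform: each edge point votes for all (rho, theta) bins it
# could lie on; bins with votes >= threshold are kept as detected lines, which
# suppresses short noise segments (highlights, scratches, rollers).
def hough_lines(points, n_theta=180, rho_res=1.0, threshold=5):
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_res), t)] += 1
    return [(r * rho_res, math.pi * t / n_theta, votes)
            for (r, t), votes in acc.items() if votes >= threshold]

# Synthetic horizontal edge y = 10: expect a peak near theta = 90 deg, rho = 10.
points = [(x, 10) for x in range(10)]
peaks = hough_lines(points, threshold=10)
print(sorted(peaks, key=lambda p: -p[2])[:3])
```

The paper additionally matches corner-line patterns in the accumulator to recover the panel pose; this sketch covers only the line-voting and thresholding stage.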


Estimation of spectral Distribution of Illumination using Maximum achromatic Region (최대 무채색 영역을 이용한 광원의 분광분포 추정)

  • Kim, Hui Su;Kim, Yun Tae;Lee, Cheol Hui;Ha, Yeong Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.4 / pp.68-68 / 2001
  • In this paper, we propose an illuminant estimation algorithm that estimates the spectral distribution of the illuminant contained in a single image. The proposed method consists of two steps. First, after partially removing the influence of the illuminant using a modified gray world assumption, the maximum achromatic region, which is bright yet close to achromatic, is found and the surface spectral reflectance of that region is obtained. Here, the surface spectral reflectance of the maximum achromatic region is estimated by principal component analysis of 1269 Munsell color samples. Second, a population of reflected lights is generated by combining the Munsell color samples with representative illuminants. Each pixel of the maximum achromatic region is then compared against this population, the reflected-light sample with the smallest color difference from the maximum achromatic region is selected, and it is defined as the spectral distribution of the light reflected from the maximum achromatic region. Finally, the spectral distribution of the illuminant contained in the image is estimated by dividing the spectral distribution of the reflected light of the maximum achromatic region by the corresponding surface spectral reflectance. To evaluate the performance of the proposed algorithm, illuminant estimation experiments were conducted on images illuminated by colored light sources, and its validity was verified by comparing the estimated spectral distributions and color differences with those of conventional methods.
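
The first step above builds on the gray world assumption: if a scene averages to gray, any deviation of the per-channel means from gray reflects the illuminant. A minimal RGB sketch of that idea (the paper's modified, spectral version is more involved; the tiny "image" below is synthetic):

```python
# Gray-world illuminant correction: compute per-channel means, then the gains
# that map the mean pixel to neutral gray. A reddish illuminant inflates the
# R mean, so its gain comes out below 1.
def gray_world_gains(pixels):
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

# Synthetic reddish-lit image: R mean inflated relative to G and B.
pixels = [(200, 100, 100), (180, 90, 110), (220, 110, 90)]
print(gray_world_gains(pixels))
```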

A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.4 / pp.674-679 / 2006
  • In this paper, a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, with respect to other 3D gesture recognition techniques, is the capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information, and guarantees robust segmentation of the hand under various illumination conditions and scene content. Several novel 3D image analysis algorithms are presented covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign-language postures.

Illumination Compensation Based on Conformity Assessment of Highlight Regions (고휘도 영역의 적합성 평가에 기반한 광원 보상)

  • Kwon, Oh-Seol
    • Journal of Broadcast Engineering / v.19 no.1 / pp.75-82 / 2014
  • This paper proposes an illuminant compensation method using a camera noise analysis without segmentation in the dichromatic reflectance model. In general, pixels within highlight regions include large amounts of information on the image illuminant. Thus, the analysis of highlight regions provides a relatively easy means of determining the characteristics of an image illuminant. Currently, conventional methods require regional segmentation and the accuracy of this segmentation then affects the illuminant estimation. Therefore, the proposed method estimates the illuminant without segmentation based on a conformity assessment of highlight regions. Furthermore, error factors, such as noise and sensor non-uniformity, can be reduced by the conformity assessment.
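
The premise above, that highlight pixels carry strong illuminant information, follows from the dichromatic model: specular reflection has the illuminant's own color. The sketch below is a crude illustration of that premise only, not the paper's conformity assessment: take the brightest pixels and report their average normalized chromaticity as an illuminant estimate. All pixel values are synthetic.

```python
# Dichromatic-model intuition: specular highlights are dominated by the
# illuminant's color, so the chromaticity of the brightest pixels is a rough
# illuminant estimate. (The paper refines which highlight pixels to trust.)
def highlight_illuminant(pixels, top_frac=0.1):
    by_brightness = sorted(pixels, key=sum, reverse=True)
    k = max(1, int(len(by_brightness) * top_frac))
    top = by_brightness[:k]
    sums = [sum(p[c] for p in top) for c in range(3)]
    total = sum(sums)
    return [s / total for s in sums]  # normalized rgb chromaticity

# One synthetic highlight pixel among nine darker body-reflection pixels.
pixels = [(240, 230, 200)] + [(50, 60, 55)] * 9
print(highlight_illuminant(pixels, top_frac=0.1))
```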