• Title/Summary/Keyword: Color-based tracking algorithm (색상 기반 추적 알고리즘)


Design of Real-time MR Contents using Substitute Videos of Vehicles and Background based on Black Box Video (블랙박스 영상 기반 차량 및 배경 대체 영상을 이용한 실시간 MR 콘텐츠의 설계)

  • Kim, Sung-Ho
    • Journal of Convergence for Information Technology / v.11 no.6 / pp.213-218 / 2021
  • In this paper, we detect and track vehicles by type in daytime highway driving videos recorded with vehicle black boxes. We also design a real-time MR content production method that creates new content by placing substitute videos of each detected vehicle type at the same locations in a new background video. To detect and track vehicles by type, we use the YOLO algorithm, and we apply an RGB color-based mask technique to the substitute videos of each detected vehicle type. Each substitute video is resized to match the area of the detected vehicle before being composited into the MR content. Experiments and simulations confirm that real-time MR content design is possible, and we believe the method will be useful in the field of VR content.
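
The pipeline above pairs YOLO-based vehicle detection with an RGB color mask to drop substitute clips into a new background. As a rough Python/OpenCV sketch only (the function name, the green key color, and the tolerance are assumptions, and the detection itself is left to an external YOLO model), compositing one substitute frame at a detected bounding box could look like this:

    import cv2
    import numpy as np

    def composite_substitute(background, substitute_frame, box,
                             key_color=(0, 255, 0), tol=40):
        # Paste a substitute vehicle frame into `background` at the detected box.
        # `box` is (x, y, w, h) from the vehicle detector (e.g. YOLO). Pixels of
        # the substitute frame close to `key_color` (an assumed RGB chroma key)
        # are treated as transparent, mirroring the RGB-mask idea in the paper.
        x, y, w, h = box
        patch = cv2.resize(substitute_frame, (w, h))    # match detected vehicle size
        key_bgr = np.array(key_color[::-1], dtype=np.int16)
        diff = np.abs(patch.astype(np.int16) - key_bgr).sum(axis=2)
        mask = diff > tol                               # True where the vehicle is opaque
        roi = background[y:y + h, x:x + w]
        roi[mask] = patch[mask]                         # composite opaque pixels only
        return background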

Design and Implementation of Eye-Gaze Estimation Algorithm based on Extraction of Eye Contour and Pupil Region (눈 윤곽선과 눈동자 영역 추출 기반 시선 추정 알고리즘의 설계 및 구현)

  • Yum, Hyosub;Hong, Min;Choi, Yoo-Joo
    • The Journal of Korean Association of Computer Education / v.17 no.2 / pp.107-113 / 2014
  • In this study, we design and implement an eye-gaze estimation system based on the extraction of the eye contour and pupil region. To extract them effectively, face candidate regions are extracted first. For face detection, a YCbCr value range for typical Asian skin color was defined through a preliminary study of Asian face images. The largest skin-color region is taken as the face candidate region, and the eye regions are extracted by applying contour and color feature analysis to the upper 50% of the face candidate region. The detected eye region is divided into three segments, and the pupil pixels in each segment are counted. The eye gaze is classified into one of three directions, left, center, or right, according to the number of pupil pixels in the three segments. In experiments using 5,616 images of 20 test subjects, the eye gaze was estimated with about 91 percent accuracy.

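The directional decision in this paper reduces to counting pupil pixels in three segments of the detected eye region. A minimal Python/NumPy sketch of just that step is shown below; the dark-pixel threshold and the function name are assumptions, and the face and eye detection stages described in the abstract are presumed to have already produced the cropped eye region.

    import numpy as np

    def classify_gaze(eye_gray, pupil_thresh=60):
        # Classify gaze from a cropped grayscale eye region: split it into three
        # equal-width segments and pick the segment with the most dark (pupil)
        # pixels, mirroring the pixel-counting rule described in the abstract.
        # The intensity threshold is an assumed value, not the paper's.
        pupil = eye_gray < pupil_thresh                 # boolean map of pupil pixels
        h, w = pupil.shape
        counts = [int(pupil[:, i * w // 3:(i + 1) * w // 3].sum()) for i in range(3)]
        return ("left", "center", "right")[int(np.argmax(counts))], counts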

Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • 송해진;고병철;변혜란
    • Journal of KIISE: Software and Applications / v.30 no.9 / pp.829-842 / 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunication, the front stage of surveillance systems using face recognition, and video-phone applications. Since the main purpose of this paper is to track a face regardless of various environments, we use a template-based face tracking method. To generate robust face templates, we apply the wavelet transform to an average face image and extract three types of wavelet templates from the transformed low-resolution average face. Because template matching is generally sensitive to changes in illumination, we apply min-max normalization with histogram equalization according to the variation of intensity. A tracking method is also applied to reduce computation time and predict a precise face candidate region. Finally, facial components are detected, and from the relative distance between the two eyes we estimate the size of the facial ellipse.
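
Illumination robustness here comes from min-max normalization plus histogram equalization applied before matching the wavelet-derived templates. A hedged Python/OpenCV sketch of that normalization and a plain template-matching call follows; a grayscale template stands in for the low-resolution wavelet templates, and the function names are assumptions.

    import cv2
    import numpy as np

    def normalize_for_matching(gray):
        # Min-max normalize to [0, 255] and equalize the histogram to reduce
        # sensitivity to illumination changes before template matching.
        norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        return cv2.equalizeHist(norm)

    def match_face_template(frame_gray, template_gray):
        # Return the best-matching location and score for the face template.
        # A plain grayscale template stands in for the wavelet templates here.
        result = cv2.matchTemplate(normalize_for_matching(frame_gray),
                                   normalize_for_matching(template_gray),
                                   cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        return max_loc, max_val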

Estimation of a Driver's Physical Condition Using Real-time Vision System (실시간 비전 시스템을 이용한 운전자 신체적 상태 추정)

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Moon, Chan-Woo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.5 / pp.213-224 / 2009
  • This paper presents a new algorithm for estimating a driver's physical condition using a real-time vision system and evaluates it on real facial image data. The system relies on face recognition to robustly track the center points and sizes of a person's two pupils and the two side edge points of the mouth. The face recognition combines color statistics in the YUV color space with a geometric model of a typical face. The system can classify head rotation in all viewing directions, detect eye/mouth occlusion, eye blinking, and eye closure, and recover the three-dimensional gaze of the eyes. These cues are used to determine the carelessness and drowsiness of the driver. Finally, experimental results demonstrate the validity and applicability of the proposed method for estimating a driver's physical condition.

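The face recognition step above is described as YUV color statistics combined with a geometric face model. As an illustrative sketch only, the following Python/OpenCV code fits a simple (U, V) chrominance skin model from sample face patches and thresholds new frames against it; the Mahalanobis cut-off and the function names are assumptions, and the geometric model and pupil/mouth tracking from the paper are not reproduced.

    import cv2
    import numpy as np

    def fit_skin_statistics(face_samples_bgr):
        # Estimate mean and covariance of (U, V) chrominance from sample face
        # patches; a stand-in for the YUV color statistics used in the paper.
        uv = np.vstack([cv2.cvtColor(p, cv2.COLOR_BGR2YUV)[:, :, 1:].reshape(-1, 2)
                        for p in face_samples_bgr]).astype(np.float64)
        return uv.mean(axis=0), np.cov(uv, rowvar=False)

    def skin_mask(frame_bgr, mean, cov, thresh=9.0):
        # Mark pixels whose squared Mahalanobis distance to the skin model is
        # below `thresh` (an assumed cut-off) as candidate face pixels.
        uv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)[:, :, 1:].astype(np.float64)
        diff = uv.reshape(-1, 2) - mean
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        return (d2 < thresh).reshape(frame_bgr.shape[:2]).astype(np.uint8) * 255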

Extraction of Skin Regions through Filtering-based Noise Removal (필터링 기반의 잡음 제거를 통한 피부 영역의 추출)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.672-678 / 2020
  • Ultra-high-speed images that accurately capture the minute movements of objects have become common as low-cost, high-performance cameras capable of high-speed filming have emerged. The proposed method removes unexpected noise contained in images captured at high speed and then extracts a region of interest that can represent personal information, such as skin areas, from the denoised image. Noise generated by abnormal electrical signals is removed by applying a bilateral filter, and a color model built through prior learning is then used to extract the region of interest representing the personal information contained in the image. Experimental results show that the introduced algorithms remove noise from high-speed images and then robustly extract the region of interest. The approach is expected to be useful in various computer vision applications, such as image preprocessing, noise elimination, and tracking and monitoring of target regions.
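
The two stages described above are edge-preserving denoising followed by color-model-based skin extraction. A minimal Python/OpenCV sketch under assumed parameters is given below; the bilateral filter settings and the fixed YCrCb skin bounds are placeholders, whereas the paper learns its color model from training data.

    import cv2
    import numpy as np

    def extract_skin_after_denoise(frame_bgr,
                                   skin_low=(0, 133, 77), skin_high=(255, 173, 127)):
        # Denoise a high-speed frame with a bilateral filter (edge-preserving),
        # then threshold it against a fixed YCrCb skin-color range. The filter
        # parameters and the YCrCb bounds are illustrative placeholders.
        denoised = cv2.bilateralFilter(frame_bgr, 9, 75, 75)
        ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb, np.array(skin_low, np.uint8),
                           np.array(skin_high, np.uint8))
        # Remove small speckles left over after thresholding.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        return cv2.bitwise_and(denoised, denoised, mask=mask), mask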

Stereo Vision Based 3D Input Device (스테레오 비전을 기반으로 한 3차원 입력 장치)

  • Yoon, Sang-Min;Kim, Ig-Jae;Ahn, Sang-Chul;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.429-441 / 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time focused to enabling effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6 degrees-of-freedom motion information from a 3D input device by employing an epipolar geometry of stereo camera, color, motion, and structure information, free from requiring the aid of camera calibration object. To extract 3D motion, we first determine the epipolar geometry of stereo camera by computing the perspective projection matrix and perspective distortion matrix. We then incorporate the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm performing color transformation, unmatched pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or aiding the navigation device that controls the viewpoint of a user in virtual reality setting. Since the stereo vision-based 3D input device is wireless, it provides users with a means for more natural and efficient interface, thus effectively realizing a feeling of immersion.