• Title/Summary/Keyword: 머리 추적 (head tracking)

Illumination Robust Extraction of Facial Region including Hair Method (조명에 강인한 머리카락을 포함한 얼굴 영역 추출 방법)

  • Park, Sung-Soo; Lee, Hyung-Soo; Kim, Dai-Jin
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.415-418 / 2007
  • This paper addresses the extraction of the facial region including hair; more specifically, it concerns a face-region extraction method that is robust to illumination changes and a hair extraction method that remains reliable under variations in hair shape and color. In general, a face image conveys an individual's distinguishing characteristics well, so extracting the facial region from an image provides preprocessing and base technology for face recognition using real facial image information and for physiognomy information services, and it can be applied directly to the creation of photo-realistic characters. Existing extraction methods such as template matching and curve-tracing algorithms suffer from long processing times and severe performance degradation depending on changes in face size, whether glasses or accessories are worn, and changes in illumination. To overcome these problems, this paper proposes a method that reliably extracts the facial region despite changes in face size, the wearing of glasses or accessories, and changes in illumination, together with a hair extraction method that remains reliable under variations in hair color and shape.
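
  • Illustrative sketch: the abstract does not detail the algorithm, but a common building block for illumination-tolerant face-region extraction is skin detection in a chroma-only color space. The minimal Python/NumPy sketch below uses YCbCr chroma thresholds that are a textbook rule of thumb, not the method or values from this paper.

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Threshold the Cb/Cr chroma channels to get a rough skin mask.

    Working in YCbCr discards most of the luminance (illumination)
    component, which is one common way to gain robustness to lighting.
    The thresholds below are widely cited rules of thumb, not values
    from the paper above.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```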

Head Detection based on Foreground Pixel Histogram Analysis (전경픽셀 히스토그램 분석 기반의 머리영역 검출 기법)

  • Choi, Yoo-Joo; Son, Hyang-Kyoung; Park, Jung-Min; Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information / v.14 no.11 / pp.179-186 / 2009
  • In this paper, we propose a head detection method based on vertical and horizontal pixel histogram analysis to overcome the drawbacks of previous head detection approaches that rely on Haar-like feature-based face detection. In the proposed method, we create vertical and horizontal foreground pixel histogram images from the background subtraction image; these record the number of foreground pixels at each vertical or horizontal position. We then extract feature points of the head region by applying the Harris corner detection method to the foreground pixel histogram images and analyzing the resulting corner points. The proposed method shows robust head detection results even for face images in which the forehead is covered by hair and for back-view images, cases in which previous approaches cannot detect the head region.
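
  • Illustrative sketch: the core of the method is the pair of foreground pixel histograms built from the background-subtraction mask. The Python/NumPy sketch below computes those histograms and derives only a very rough head band from them; the paper's actual feature-point extraction applies Harris corner detection to images of the histograms, which is omitted here, and head_ratio is an assumed placeholder.

```python
import numpy as np

def foreground_pixel_histograms(fg_mask):
    """Per-column and per-row foreground pixel counts from a binary
    background-subtraction mask (True/1 = foreground)."""
    fg = fg_mask.astype(np.int32)
    vertical = fg.sum(axis=0)    # one count per image column
    horizontal = fg.sum(axis=1)  # one count per image row
    return vertical, horizontal

def rough_head_box(fg_mask, head_ratio=0.25):
    """Very rough head-region estimate taken from the two histograms.

    Takes the top `head_ratio` of the body height as the head band and
    bounds it horizontally; this is a simplification, not the paper's
    Harris-corner-based rule.
    """
    vertical, horizontal = foreground_pixel_histograms(fg_mask)
    rows = np.nonzero(horizontal)[0]
    cols = np.nonzero(vertical)[0]
    if rows.size == 0 or cols.size == 0:
        return None
    top, bottom = rows[0], rows[-1]
    head_bottom = top + int(head_ratio * (bottom - top))
    band_cols = np.nonzero(fg_mask[top:head_bottom + 1].sum(axis=0))[0]
    return top, head_bottom, band_cols[0], band_cols[-1]  # y0, y1, x0, x1
```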

Design of the Vision Based Head Tracker Using Area of Artificial Mark (인공표식의 면적을 이용하는 영상 기반 헤드 트랙커 설계)

  • 김종훈; 이대우; 조겸래
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.34 no.7 / pp.63-70 / 2006
  • This paper describes a vision-based head tracker that uses the area of artificial marks. The head tracker detects translational and rotational motion with a web camera, and the motion estimates are obtained through image processing and a neural network. Because of the characteristics of the cockpit, a specific color on the helmet is tracked for translational motion, while rotational motion is tracked by a neural network whose input is the ratio of the areas of two differently colored marks on the helmet. The neural network algorithms used are back-propagation and the RBFN (Radial Basis Function Network). Both back-propagation, which exploits feedback, and the RBFN, which exploits statistical properties, perform well in tracking a nonlinear system such as head motion. Finally, the paper analyzes and compares their tracking performance.
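
  • Illustrative sketch: the rotational cue in this tracker is the area ratio of two colored marks on the helmet, fed into a neural network. The Python/NumPy sketch below shows a hue-threshold area measurement and a minimal RBFN forward pass; the color ranges, centers, width, and weights are hypothetical placeholders that would have to be trained, and the back-propagation branch is not shown.

```python
import numpy as np

def color_area(hsv, h_lo, h_hi, s_min=60, v_min=60):
    """Count pixels whose hue falls in [h_lo, h_hi] (assumed mark color)."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return int(np.count_nonzero((h >= h_lo) & (h <= h_hi) & (s >= s_min) & (v >= v_min)))

class TinyRBFN:
    """Minimal radial-basis-function network: area-ratio feature -> head angle.

    Centers, width, and weights must be trained on labelled helmet images;
    nothing here reproduces the paper's trained model."""
    def __init__(self, centers, sigma, weights):
        self.centers = np.asarray(centers, dtype=float)   # shape (K, d)
        self.sigma = float(sigma)
        self.weights = np.asarray(weights, dtype=float)   # shape (K,)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        d2 = np.sum((self.centers - x) ** 2, axis=1)      # squared distances to centers
        phi = np.exp(-d2 / (2.0 * self.sigma ** 2))       # Gaussian basis responses
        return float(phi @ self.weights)                  # linear output layer
```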

Human-Computer Interface using the Eye-Gaze Direction (눈의 응시 방향을 이용한 인간-컴퓨터간의 인터페이스에 관한 연구)

  • Kim, Do-Hyoung; Kim, Jea-Hean; Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.6 / pp.46-56 / 2001
  • In this paper we propose an efficient approach for real-time eye-gaze tracking from an image sequence and magnetic sensory information. The inputs to the eye-gaze tracking system are images taken by a camera and data from a magnetic sensor. These measurements are sufficient to describe eye and head movement, because the camera and the receiver of the magnetic sensor are stationary with respect to the head. Experimental results show that the proposed system runs in real time and is feasible as a new computer interface that can replace the mouse.
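
  • Illustrative sketch: because the camera and the magnetic receiver are fixed to the head, the head pose from the magnetic sensor and the eye direction from the camera can be composed into a world-frame gaze direction. The sketch below shows that composition only; it is a generic formulation, not the paper's calibration or estimation pipeline.

```python
import numpy as np

def gaze_direction_world(head_R, eye_dir_head):
    """Compose head orientation and eye-in-head direction.

    head_R       : 3x3 rotation matrix of the head in the world frame
                   (e.g. as reported by a magnetic sensor).
    eye_dir_head : unit gaze vector measured in the head/camera frame.

    The world gaze direction is the eye vector rotated by the head pose.
    """
    v = np.asarray(eye_dir_head, dtype=float)
    g = np.asarray(head_R, dtype=float) @ v
    return g / np.linalg.norm(g)
```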

Development of Measurement Method and Contents for Unilateral Neglect using Eye-tracking Technique (시선추적기법을 적용한 편측무시 측정 방법 및 개선 콘텐츠 개발)

  • Choi, Junghee; Shin, Sung-Wook; Moon, Ho-Sang; Goo, Sejin; Chung, Sung-Taek
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.5 / pp.187-195 / 2018
  • In this study, using real-time gaze tracking and head tracking, we aimed to quantitatively evaluate the deviation between the patient's head and gaze directions while minimizing the examination errors that apraxia causes in conventional paper-based tests. As a result, we developed software that quantitatively measures gaze and head movement information, and we computerized the line bisection and star cancellation tests commonly used as conventional paper tests. In addition, for rehabilitation training, contents corresponding to the visual skills at the lower level of Warren's visual hierarchy model were implemented so that training can be performed repeatedly and independently. This allows the patient to participate actively in rehabilitation and the degree of improvement to be compared quantitatively.
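
  • Illustrative sketch: the quantity of interest is the deviation between head and gaze directions. The sketch below computes that deviation as the angle between two unit vectors; the paper's actual scoring of the line bisection and star cancellation tests is not reproduced here.

```python
import numpy as np

def head_gaze_deviation_deg(head_dir, gaze_dir):
    """Angle in degrees between the head-facing and gaze directions.

    A simple quantitative deviation measure of the kind such a system
    could report; the paper's exact metric is not specified here.
    """
    h = np.asarray(head_dir, dtype=float)
    g = np.asarray(gaze_dir, dtype=float)
    h = h / np.linalg.norm(h)
    g = g / np.linalg.norm(g)
    return float(np.degrees(np.arccos(np.clip(h @ g, -1.0, 1.0))))
```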

2D Human Pose Estimation Using Component-Based Density Propagation (구성요소 기반 확률 전파를 이용한 2D 사람 자세 추정)

  • Cha, Eun-Mi; Lee, Kyoung-Mi
    • Proceedings of the Korean HCI Society Conference / 2007.02a / pp.725-730 / 2007
  • In this paper, each body part needed for human tracking is detected as a component and estimated individually through a body model that connects the components. The six components most important for motion tracking, namely the head, torso, left arm, right arm, left leg, and right leg, are detected and tracked; using the center and color information of each component, correspondence is established between the previous and current frames, and each component is tracked individually through density propagation. We propose a method that estimates the human pose from the component tracking results via component-based density propagation. Using color information such as skin color in the input image, together with the center and color information of each body part (each component of the body model), the performed motion can be estimated through density propagation. The proposed human motion tracking system was applied to seven motions used in movement education for young children: walking, running, hopping, bending, stretching, balancing, and turning. Detecting each component of the proposed body model independently yielded a high average recognition rate of 96%, and experiments on the seven motions achieved an average success rate of 88.5%, demonstrating the validity of the proposed method.
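
  • Illustrative sketch: each body component is tracked by its own density (particle) propagation over a state consisting of the component's 2-D center, with color similarity linking consecutive frames. The Python/NumPy sketch below shows one resample-and-diffuse step and a Bhattacharyya-style color likelihood; the noise level and the histogram similarity choice are assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_component(particles, weights, motion_std=5.0):
    """One density-propagation (CONDENSATION-style) step for one body
    component tracked by its 2-D center: resample particles by weight,
    then diffuse them with Gaussian noise."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    return particles[idx] + rng.normal(0.0, motion_std, size=(n, 2))

def color_likelihood(patch_hist, model_hist):
    """Bhattacharyya-style similarity between a candidate patch histogram
    and the component's color model (a common choice; an assumption here)."""
    p = patch_hist / (patch_hist.sum() + 1e-12)
    q = model_hist / (model_hist.sum() + 1e-12)
    return float(np.sum(np.sqrt(p * q)))
```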

A Real-Time Head Tracking Algorithm Using Mean-Shift Color Convergence and Shape Based Refinement (Mean-Shift의 색 수렴성과 모양 기반의 재조정을 이용한 실시간 머리 추적 알고리즘)

  • Jeong, Dong-Gil; Kang, Dong-Goo; Yang, Yu Kyung; Ra, Jong Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.6 / pp.1-8 / 2005
  • In this paper, we propose a two-stage head tracking algorithm suitable for a real-time active camera system with pan-tilt-zoom functions. In the color convergence stage, we first assume that the shape of the head is an ellipse and that its model color histogram has been acquired in advance. The mean-shift method is then applied to roughly estimate the target position by examining the histogram similarity between the model and a candidate ellipse. To reflect temporal changes in object color and enhance the reliability of mean-shift based tracking, the target histogram obtained in the previous frame is used to update the model histogram. In this update, to alleviate error accumulation due to outliers in the previous frame's target ellipse, the previous-frame target histogram is computed within an ellipse adaptively shrunken on the basis of the model histogram. In addition, to further enhance tracking reliability, we set the initial position closer to the true position by compensating for the global motion, which is rapidly estimated from two 1-D projection datasets. In the subsequent stage, we refine the position and size of the ellipse obtained in the first stage by using shape information, defining a robust shape-similarity function based on the gradient direction. Extensive experimental results show that the proposed algorithm tracks the head well even when the person moves fast, the head size changes drastically, or the background contains heavy clutter and distracting colors. The proposed algorithm runs at a processing speed of about 30 fps on a standard PC.
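
  • Illustrative sketch: the first (color convergence) stage is essentially a mean-shift update of an elliptical window over a per-pixel color-likelihood map, plus a model-histogram update. The Python/NumPy sketch below shows those two pieces in simplified form; the adaptive ellipse shrinking, the 1-D projection based global-motion compensation, and the gradient-direction shape refinement described in the abstract are omitted, and the blending factor alpha is an assumption.

```python
import numpy as np

def mean_shift_step(weights_img, cx, cy, a, b):
    """One mean-shift update of an ellipse center (cx, cy) with axes (a, b).

    weights_img holds per-pixel likelihoods (e.g. a histogram back-projection
    of the model color). The new center is the likelihood-weighted centroid
    of the pixels inside the ellipse."""
    h, w = weights_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = ((xs - cx) / a) ** 2 + ((ys - cy) / b) ** 2 <= 1.0
    wgt = weights_img * inside
    total = wgt.sum()
    if total <= 0:
        return cx, cy
    return float((wgt * xs).sum() / total), float((wgt * ys).sum() / total)

def update_model_hist(model_hist, target_hist, alpha=0.1):
    """Blend the previous-frame target histogram into the model so the
    tracker follows gradual color changes (blending factor is an assumption)."""
    return (1.0 - alpha) * model_hist + alpha * target_hist
```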

Face Tracking Combining Active Contour Model and Color-Based Particle Filter (능동적 윤곽 모델과 색상 기반 파티클 필터를 결합한 얼굴 추적)

  • Kim, Jin-Yul; Jeong, Jae-Ki
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.10 / pp.2090-2101 / 2015
  • We propose a robust tracking method that effectively combines the merits of the ACM (active contour model) and the color-based PF (particle filter). In the proposed method, the PF and the ACM track the color distribution and the contour of the target, respectively, and a decision stage merges the estimates from the two trackers to determine the position and scale of the target and to update the target model. By controlling the internal energy of the ACM based on the position and scale estimated by the PF tracker, we prevent the snake points from falsely converging to background clutter. We applied the proposed method to tracking a person's head in video and conducted computer experiments to analyze the errors of the estimated position and scale.
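
  • Illustrative sketch: the decision stage has to merge the particle-filter and active-contour estimates, and the PF estimate is also used to constrain the snake's internal energy. The sketch below shows one plausible confidence-weighted merge and one plausible PF-anchored energy term; the abstract does not give the paper's exact formulas, so these are assumptions.

```python
import numpy as np

def fuse_estimates(pf_pos, pf_scale, pf_conf, acm_pos, acm_scale, acm_conf):
    """Merge PF and ACM estimates of target position and scale by
    confidence-weighted averaging (an illustrative decision rule)."""
    w = np.array([pf_conf, acm_conf], dtype=float)
    w = w / w.sum()
    pos = w[0] * np.asarray(pf_pos, dtype=float) + w[1] * np.asarray(acm_pos, dtype=float)
    scale = w[0] * pf_scale + w[1] * acm_scale
    return pos, scale

def pf_anchored_energy(snake_pts, pf_pos, pf_scale, lam=0.5):
    """Extra internal-energy term that penalizes snake points straying far
    from the PF estimate, discouraging convergence to background clutter
    (one way to realize the coupling described above)."""
    d = np.linalg.norm(np.asarray(snake_pts, dtype=float) - np.asarray(pf_pos, dtype=float), axis=1)
    return float(lam * np.mean(np.maximum(0.0, d - pf_scale) ** 2))
```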

Robust Extraction of Heartbeat Signals from Mobile Facial Videos (모바일 얼굴 비디오로부터 심박 신호의 강건한 추출)

  • Lomaliza, Jean-Pierre; Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.20 no.1 / pp.51-56 / 2019
  • This paper proposes an improved heartbeat signal extraction method for ballistocardiography (BCG)-based heart-rate measurement in a mobile environment. First, from a mobile facial video, a handshake-free head motion signal is extracted by tracking facial features and background features at the same time. Then, a novel signal periodicity computation method is proposed to accurately separate the heartbeat signal from the head motion signal. The proposed method robustly extracts heartbeat signals from mobile facial videos and enables more accurate heart-rate measurement than the existing method, reducing measurement errors by 3-4 bpm.
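
  • Illustrative sketch: after hand-shake is cancelled by subtracting the background-feature trajectory from the face-feature trajectory, the heart rate corresponds to the dominant periodic component of the residual head motion. The sketch below uses simple FFT peak picking in a plausible heart-rate band as a stand-in for the paper's periodicity computation, which is more elaborate.

```python
import numpy as np

def heart_rate_bpm(face_y, background_y, fps, lo_bpm=45, hi_bpm=180):
    """Estimate heart rate (bpm) from per-frame vertical feature trajectories.

    face_y, background_y : mean vertical positions of tracked face and
    background features per frame. Subtracting the background trajectory
    removes camera shake, leaving (mostly) ballistocardiographic head
    motion; the dominant spectral peak in the heart-rate band gives the rate.
    """
    sig = np.asarray(face_y, dtype=float) - np.asarray(background_y, dtype=float)
    sig = sig - sig.mean()
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)   # Hz
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    if not band.any():
        return None
    return float(freqs[band][np.argmax(spectrum[band])] * 60.0)
```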

A real-time robust body-part tracking system for intelligent environment (지능형 환경을 위한 실시간 신체 부위 추적 시스템 -조명 및 복장 변화에 강인한 신체 부위 추적 시스템-)

  • Jung, Jin-Ki; Cho, Kyu-Sung; Choi, Jin; Yang, Hyun S.
    • Proceedings of the Korean HCI Society Conference / 2009.02a / pp.411-417 / 2009
  • We propose a robust body-part tracking system for intelligent environments that does not limit the user's freedom. Unlike previous gesture recognizers, we improve the generality of the system by adding the ability to recognize details such as the difference between long and short sleeves. For precise tracking of each body part, we obtain images of the hands, head, and feet separately from a single camera and choose an appropriate feature for each part when detecting it. Using a calibrated camera, we convert the detected 2D body parts into a 3D posture. In experiments, the system showed improved hand-tracking performance in real time (50 fps).
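
  • Illustrative sketch: converting 2D detections from a single calibrated camera into 3D requires back-projecting each image point to a viewing ray and intersecting it with scene constraints (for example, feet on a known floor plane). The sketch below shows only the back-projection step; it is a generic formulation, not the paper's specific reconstruction.

```python
import numpy as np

def pixel_to_ray(u, v, K):
    """Back-project an image point (u, v) to a unit viewing ray in camera
    coordinates using the intrinsic calibration matrix K. Recovering an
    actual 3-D posture from a single camera additionally needs scene
    constraints, which the abstract implies but does not detail."""
    Kinv = np.linalg.inv(np.asarray(K, dtype=float))
    ray = Kinv @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)
```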
