• Title/Summary/Keyword: Affine Motion Estimation


Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.494-504 / 2004
  • Research on gaze detection has advanced considerably and has many applications. Most previous approaches rely only on image processing algorithms, so they require long processing times and impose many constraints. In our work, we implement gaze detection with a computer vision system using a single IR-LED based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position determined by the facial movements is then computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor, with an RMS error of about 4.2 cm between the computed and real gaze positions.

Motion Activity Estimation for Mobile Interface Control (모바일 인터페이스 제어를 위한 움직임 추정 기법)

  • Lee, Chul-Woo; Kim, Chang-Su
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2008.11a / pp.135-138 / 2008
  • In this paper, we propose a method for controlling a mobile interface by obtaining global motion vectors from images captured by the camera embedded in a mobile device such as a mobile phone or UMPC. Feature points are extracted from the camera input images, and the motion of each feature point is estimated using optical flow. From the resulting set of motion vectors, an affine matrix is computed to derive parameters that describe the global motion of the image. These motion parameters in turn generate control signals for the interface, which can be used to control mobile interface functions such as menu navigation, slide shows, and document scrolling. Simulations confirm that the image motion information required for interface control is obtained reliably.
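The global-motion step described above — track feature points, then fit an affine matrix to the resulting motion vectors — can be sketched with a least-squares fit in NumPy. This is an illustrative helper under assumed names, not the authors' implementation:

```python
import numpy as np

def estimate_affine(src, dst):
    """Fit a 2x3 affine matrix A mapping src points to dst points by
    linear least squares, as in global motion estimation from
    optical-flow feature tracks (hypothetical helper, not the paper's code)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    # Design matrix: one row [x, y, 1] per tracked feature point.
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ A.T = dst for the six affine parameters.
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T  # shape (2, 3)
```

In practice the motion vectors would come from an optical-flow tracker, and a robust variant (e.g. RANSAC over the same least-squares fit) would reject outlier tracks.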


Term Structure Estimation Using Official Rate

  • Rhee, Joon Hee; Kim, Yoon Tae
    • Communications for Statistical Applications and Methods / v.10 no.3 / pp.655-663 / 2003
  • The fundamental term structure model is based on modelling the short rate. It is well known that the short rate depends on the interest rate policy of monetary authorities, especially on the official rate. Babbs and Webber (1994) modelled the term structure of interest rates using the official rate, assuming that the official rate follows a jump process. This reflects the fact that the official rate changes infrequently. In this paper, we test this official-rate term structure model and compare the jump-diffusion model with the pure diffusion model.
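The contrast between a pure diffusion and a jump-diffusion short rate can be illustrated with a small Euler simulation. The mean-reverting form and all parameter names here are illustrative assumptions, not Babbs and Webber's specification:

```python
import numpy as np

def simulate_short_rate(r0, kappa, theta, sigma, lam, jump_scale, T, n, rng):
    """Euler simulation of a short rate with mean reversion plus a
    compound-Poisson jump component (a sketch of the jump-diffusion idea;
    setting lam = 0 recovers the pure diffusion). All parameters are
    illustrative, not the paper's calibration."""
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        # With probability lam * dt, the official rate "jumps".
        jump = rng.normal(0.0, jump_scale) if rng.random() < lam * dt else 0.0
        r[i + 1] = (r[i] + kappa * (theta - r[i]) * dt
                    + sigma * np.sqrt(dt) * rng.normal() + jump)
    return r
```

Comparing paths with `lam > 0` against `lam = 0` shows the infrequent, discrete moves that motivate modelling the official rate as a jump process.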

Real-time Stabilization Method for Video acquired by Unmanned Aerial Vehicle (무인 항공기 촬영 동영상을 위한 실시간 안정화 기법)

  • Cho, Hyun-Tae; Bae, Hyo-Chul; Kim, Min-Uk; Yoon, Kyoungro
    • Journal of the Semiconductor & Display Technology / v.13 no.1 / pp.27-33 / 2014
  • Video from an unmanned aerial vehicle (UAV) is affected by the natural environment, particularly wind, because of the UAV's light weight; the UAV's shaking motion makes the video shake as well. The objective of this paper is to produce stabilized video by removing the shakiness of video acquired by a UAV. The stabilizer estimates the camera motion by computing the optical flow between two successive frames. The estimated camera movements contain intended movements as well as unintended shaking, and the unintended movements are eliminated by a smoothing process. Experimental results show that the proposed method performs almost as well as other offline stabilizers. However, estimating the camera motion, i.e., computing the optical flow, is a bottleneck for real-time stabilization. To solve this problem, we parallelize the stabilizer, producing stabilized video at an average of 30 frames per second. The proposed method can be applied to video acquired by UAVs, to shaky video from non-professional users, and to other fields that require object tracking or accurate image analysis and representation.
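The smoothing step that separates intended from unintended motion can be sketched as follows: accumulate per-frame motion into a camera trajectory, low-pass it, and take the difference as the per-frame correction. The moving-average window is an assumed choice, not necessarily the paper's filter:

```python
import numpy as np

def smooth_trajectory(motions, radius=15):
    """Given per-frame motion estimates (N x d, e.g. dx, dy per frame),
    accumulate them into a camera trajectory, smooth it with a moving
    average, and return the per-frame correction to apply when warping.
    A sketch of the smoothing step; the window size is an assumption."""
    traj = np.cumsum(motions, axis=0)                 # camera path
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    pad = np.pad(traj, ((radius, radius), (0, 0)), mode="edge")
    smooth = np.vstack([np.convolve(pad[:, k], kernel, mode="valid")
                        for k in range(traj.shape[1])]).T
    return smooth - traj                              # correction per frame
```

For purely intended (constant) motion the correction is near zero, so deliberate pans survive while high-frequency shake is cancelled.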

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong; Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.23-35 / 2002
  • Gaze detection is to locate the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system using a single camera above a monitor; the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we automatically locate the facial region and facial features (both eyes, nostrils, and lip corners) in 2D camera images. From the movement of the feature points detected in the initial images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at a position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane defined by those moved 3D feature positions. Experimental results show that we can obtain the gaze position on a 19-inch monitor, with an RMS error of about 2.01 inches between the computed and real gaze positions.
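The final step above — intersect the facial-plane normal with the screen — reduces to a few lines of vector geometry. This is a minimal sketch that assumes a simplified geometry where the monitor is the plane z = 0; the paper's full camera/monitor calibration is omitted:

```python
import numpy as np

def gaze_on_monitor(p1, p2, p3):
    """Compute the facial-plane normal from three 3D feature points and
    intersect the ray from the plane centroid along that normal with the
    monitor plane z = 0. A sketch of the geometric step only; the
    coordinate frame is an assumption."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)        # facial plane normal
    n = n / np.linalg.norm(n)
    c = (p1 + p2 + p3) / 3.0              # ray origin at feature centroid
    t = -c[2] / n[2]                      # solve (c + t * n).z == 0
    return (c + t * n)[:2]                # (x, y) gaze point on the screen
```

With more than three features, a least-squares plane fit over all feature points would give a more stable normal.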

Antiblurry Dejitter Image Stabilization Method of Fuzzy Video for Driving Recorders

  • Xiong, Jing-Ying; Dai, Ming; Zhao, Chun-Lei; Wang, Ruo-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.6 / pp.3086-3103 / 2017
  • Video captured by vehicle cameras often contains blurry or dithering frames, caused by inadvertent motion from bumps in the road or by insufficient illumination in the morning or evening, which greatly reduces the recognizability and representation of objects in the recordings. Therefore, a real-time electronic stabilization method to correct fuzzy video from driving recorders is proposed. In the first stage, feature detection, a coarse-to-fine inspection policy and a scale nonlinear diffusion filter are proposed to provide more accurate keypoints. Second, a new anti-blur binary descriptor and a feature point selection strategy for unintentional motion estimation are proposed, which bring more discriminative power. In addition, a new evaluation criterion for affine region detectors is presented, based on the percentage interval of repeatability. The experiments show that the proposed method improves the detection of blurry corner points; moreover, it improves the performance of the algorithm while guaranteeing a high processing speed.

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • 박강령
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.79-88 / 2004
  • Gaze detection is to locate, by computer vision, the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application: they are applicable to man-machine interfaces that help disabled users operate computers, and to view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using a single IR-LED based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position determined by the facial movements is then computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor, with an RMS error of about 4.8 cm between the computed and real gaze positions.
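The eye-movement stage described above uses a trained neural network mapping eye features to gaze positions. The following one-hidden-layer regression network is an illustrative stand-in for that idea; the architecture, sizes, and learning rate are assumptions, not the paper's actual network:

```python
import numpy as np

def train_gaze_mlp(X, Y, hidden=16, lr=0.05, epochs=2000, seed=0):
    """Train a tiny one-hidden-layer network mapping eye-feature vectors
    (N x f) to 2D gaze offsets (N x 2) by gradient descent on squared
    error. Illustrative only; not the paper's architecture."""
    rng = np.random.RandomState(seed)
    W1 = rng.randn(X.shape[1], hidden) * 0.1
    b1 = np.zeros(hidden)
    W2 = rng.randn(hidden, Y.shape[1]) * 0.1
    b2 = np.zeros(Y.shape[1])
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        P = H @ W2 + b2                   # predicted gaze offsets
        E = P - Y
        losses.append(float(np.mean(E ** 2)))
        # Backpropagation of the mean squared error.
        dP = 2 * E / len(X)
        dW2 = H.T @ dP; db2 = dP.sum(0)
        dH = dP @ W2.T * (1 - H ** 2)
        dW1 = X.T @ dH; db1 = dH.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return (W1, b1, W2, b2), losses
```

In a real system, X would come from detected pupil positions relative to the eye corners, and Y from calibration points the user fixates on.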