• Title/Summary/Keyword: Optical Flow Estimation Algorithm (광류 추정 알고리즘)


Algorithm for Arbitrary Point Tracking using Pyramidal Optical Flow (피라미드 기반 광류 추정을 이용한 영상 내의 임의의 점 추적 알고리즘)

  • Lee, Jae-Kwang; Park, Chang-Joon
    • Journal of Korea Multimedia Society / v.10 no.11 / pp.1407-1416 / 2007
  • This paper describes an algorithm for tracking arbitrary points using pyramidal optical flow. The optical flow is computed with the Lucas-Kanade estimation method. An image pyramid is employed so that large motions can be handled while the estimate remains sensitive to small motions. Furthermore, a rectification process is proposed to reduce the error that accumulates as the computation descends to the lower levels of the image pyramid. The accuracy of the optical flow estimate is improved by applying constraints and sub-pixel interpolation, which allows the algorithm to track points that lack features such as edges or corners. The proposed algorithm is implemented and preliminary results are presented.

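At each pyramid level, the Lucas-Kanade scheme this paper builds on reduces to solving a small least-squares system on image gradients. A minimal single-level sketch in NumPy (the pyramid loop and the paper's rectification step are omitted):

```python
import numpy as np

def lucas_kanade_step(I0, I1):
    """Estimate one translational flow vector (u, v) between two grayscale
    patches by solving the Lucas-Kanade least-squares system."""
    I0 = I0.astype(float)
    I1 = I1.astype(float)
    # Spatial gradients (central differences) and temporal difference.
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0
    # Brightness-constancy constraint per pixel: Ix*u + Iy*v = -It.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v) in pixels
```

For large motions the same step is applied coarse-to-fine: the flow found at a downsampled level seeds the search at the next finer level, which is what the image pyramid in the abstract provides.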

Attitudes Estimation for the Vision-based UAV using Optical Flow (광류를 이용한 영상기반 무인항공기의 자세 추정)

  • Jo, Seon-Yeong; Kim, Jong-Hun; Kim, Jung-Ho; Cho, Kyeum-Rae; Lee, Dae-Woo
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.38 no.4 / pp.342-351 / 2010
  • UAVs (Unmanned Aerial Vehicles) carry an INS (Inertial Navigation System) and also electro-optical equipment for their missions. This paper proposes a vision-based attitude estimation algorithm for UAVs using a Kalman filter and optical flow. The optical flow is computed from video captured by the camera mounted on the UAV, and the UAV's attitude is derived from it. A Kalman filter is used to address the low reliability of the raw measurements and to estimate the attitude. The algorithm was verified through experiments using a rate table and real flight video. On the rate table, the error stayed within 2 degrees and the trend matched the AHRS measurements; on the real flight video, however, the maximum yaw error was 21 degrees and the maximum pitch error was 7.8 degrees.
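To give a flavor of the filtering involved, here is a minimal scalar Kalman filter that smooths a noisy angle sequence under a random-walk attitude model. The paper's actual state vector and noise settings are not given here, so `q` and `r` below are illustrative assumptions:

```python
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=1.0):
    """Scalar Kalman filter over noisy angle measurements.
    q: process-noise variance, r: measurement-noise variance (assumed)."""
    x, p = measurements[0], 1.0          # initial state and covariance
    estimates = []
    for z in measurements:
        p = p + q                        # predict (random-walk attitude model)
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # correct with the measurement residual
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)
```

With a small `q`, the filter trusts the model and heavily attenuates measurement noise, which is the role it plays for the low-reliability flow-derived attitude in the abstract.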

Applicability of Optical Flow Information for UAV Navigation under GNSS-denied Environment (위성항법 불용 환경에서의 무인비행체 항법을 위한 광류 정보 활용)

  • Kim, Dongmin; Kim, Taegyun; Jeaong, Hoijo; Suk, Jinyoung; Kim, Seungkeun; Kim, Younsil; Han, Sanghyuck
    • Journal of Advanced Navigation Technology / v.24 no.1 / pp.16-27 / 2020
  • This paper investigates the applicability of optical flow information for unmanned aerial vehicle (UAV) navigation in environments where the global navigation satellite system (GNSS) is unavailable. Since optical flow is one of the key measurements for estimating horizontal velocity and position, its accuracy must be guaranteed. A navigation algorithm is therefore proposed that estimates and cancels the biases the optical flow information may contain, improving estimation performance. To apply and verify the proposed algorithm, an integrated simulation environment is built around a guidance, navigation, and control (GNC) system, and numerical simulations are carried out to analyze the navigation performance.
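The core measurement relation here is simple: for a downward-looking camera at altitude h, a translational flow of u pixels per second corresponds to a ground speed of u·h/f. A hedged sketch, with a naive hover-calibration bias estimate standing in for the paper's bias estimator (whose details are not given in the abstract):

```python
import numpy as np

def flow_to_velocity(flow_px_s, altitude_m, focal_px):
    """Pinhole relation for a nadir camera: ground speed [m/s] = flow * h / f."""
    return flow_px_s * altitude_m / focal_px

def debiased_flow(flow_px_s, hover_flow_samples):
    """Subtract a bias estimated as the mean flow recorded while hovering,
    where the true flow should be zero. A simplistic stand-in for the
    paper's bias estimator, for illustration only."""
    return flow_px_s - np.mean(hover_flow_samples)
```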

Vision-based Target Tracking for UAV and Relative Depth Estimation using Optical Flow (무인 항공기의 영상기반 목표물 추적과 광류를 이용한 상대깊이 추정)

  • Jo, Seon-Yeong; Kim, Jong-Hun; Kim, Jung-Ho; Lee, Dae-Woo; Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.3 / pp.267-274 / 2009
  • Recently, UAVs (Unmanned Aerial Vehicles) have drawn much attention as unmanned systems for various missions, many of which rely on a vision system. In particular, missions such as surveillance and pursuit are carried out using the vision data transmitted from the UAV. Small UAVs often use monocular vision for reasons of weight and cost. Research on performing missions with monocular vision continues, but because the ground and the target lie at different distances from the UAV, 3D distance measurement remains inaccurate. In this study, the Mean-Shift algorithm, optical flow, and the subspace method are combined to estimate relative depth. The Mean-Shift algorithm is used for target tracking and for determining the region of interest (ROI). Optical flow captures image motion information from pixel intensities. The subspace method then computes the translation and rotation of the image and estimates the relative depth. Finally, results are presented using images obtained from UAV experiments.
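The depth cue being exploited can be stated in one line: under purely translational camera motion, flow magnitude is inversely proportional to depth (u = f·T/Z), so depths relative to a reference point fall straight out of the flow. A toy sketch of that relation (the paper's subspace method, which also separates out rotational motion, is more involved):

```python
import numpy as np

def relative_depth(flow_mags, ref=0):
    """Depth of each tracked point relative to point `ref`, assuming pure
    camera translation: Z_i / Z_ref = u_ref / u_i."""
    u = np.asarray(flow_mags, dtype=float)
    return u[ref] / u
```

A point whose flow is twice the reference's is half as far away; a point with half the flow is twice as far.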

Obstacle Detection and Recognition System for Autonomous Driving Vehicle (자율주행차를 위한 장애물 탐지 및 인식 시스템)

  • Han, Ju-Chan; Koo, Bon-Cheol; Cheoi, Kyung-Joo
    • Journal of Convergence for Information Technology / v.7 no.6 / pp.229-235 / 2017
  • In recent years, research on detecting and recognizing objects from large amounts of data has been carried out actively. This paper proposes a system that extracts objects considered obstacles from road-driving images and classifies them as cars, people, or motorcycles. Objects are extracted using optical flow, taking into account the direction and magnitude of their motion. The extracted objects are then recognized with AlexNet, one of the CNN (Convolutional Neural Network) recognition models. For the experiments, various road images were collected with a dashboard camera (black box). The results showed an object extraction accuracy of 92% and an object recognition accuracy of 96%.
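The extraction step can be pictured as thresholding per-pixel flow to form obstacle-candidate regions before cropping them for the CNN. A minimal magnitude-only sketch (the paper also uses flow direction, which is omitted here):

```python
import numpy as np

def moving_object_mask(flow, mag_thresh=1.0):
    """flow: H x W x 2 array of per-pixel (dx, dy). Returns a boolean mask,
    True where motion magnitude exceeds the threshold — a crude
    obstacle-candidate mask (threshold value is illustrative)."""
    return np.linalg.norm(flow, axis=2) > mag_thresh
```

Connected regions of the mask would then be cropped and passed to the classifier.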

A Study on the Estimation of Smartphone Movement Distance using Optical Flow Technology on a Limited Screen (제한된 화면에 광류 기술을 적용한 스마트폰 이동 거리 추정에 관한 연구)

  • Jung, Keunyoung; Oh, Jongtaek
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.4 / pp.71-76 / 2019
  • Research on indoor location tracking using smartphones is being carried out actively. In particular, the movement distance of the smartphone must be measured accurately so that the user's route can be displayed on a map. Location tracking using the sensors mounted on smartphones has long been studied, but sensor-only measurement of the user's moving distance is not accurate enough. An appropriate algorithm is therefore needed to measure the distance accurately when the user carries the smartphone in a fixed posture. In this paper, we propose a method that reduces the moving-distance estimation error in pyramid-based optical flow estimation by limiting the smartphone's screen region so that the user's foot shape is excluded.
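"Limiting the screen" amounts to computing flow statistics only over a region that excludes the user's feet. A small sketch of that idea; the retained fraction of the frame is an assumption, not the paper's value:

```python
import numpy as np

def masked_mean_flow(flow, top_frac=0.5):
    """Mean (dx, dy) over only the top `top_frac` of the frame, ignoring
    the lower region where the user's feet would appear."""
    rows = int(flow.shape[0] * top_frac)
    return flow[:rows].reshape(-1, 2).mean(axis=0)
```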

Using play-back image sequence to detect a vehicle cutting in a line automatically (역방향 영상재생을 이용한 끼어들기 차량 자동추적)

  • Rheu, Jee-Hyung; Kim, Young-Mo
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.95-101 / 2014
  • This paper describes an effective method for automatically tracking a vehicle cutting into a lane. The method employs KLT tracking, based on optical flow, over a play-back image sequence: the frames are ordered in the rewind direction from a reference point in time. The reference point is usually the moment at which the recognition camera can read the license plate clearly, which is also when the largest images of the tracked object are obtained. When optical flow is applied, larger images of the tracked object yield more feature points, and more feature points lead to better tracking results. After the recognition camera reads the license plate of a vehicle suspected of a cut-in violation, the system extracts the play-back image sequence from the wide-range tracking cameras. Experimental results compare the suggested play-back ordering with the conventional play-forward ordering and show that the proposed algorithm performs well enough to be applied to an unmanned system for monitoring cut-in violations.
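The paper's central idea, play-back ordering, is simply a matter of feeding the tracker frames backwards from the reference frame where the plate was read (and where the target appears largest):

```python
def playback_order(frames, ref_idx):
    """Re-order frames for rewind tracking: start at the reference frame
    and step backwards in time, so the tracker is initialized on the
    largest, most feature-rich view of the vehicle."""
    return frames[ref_idx::-1]
```

A KLT tracker would then be initialized on `frames[ref_idx]` and run over this reversed sequence instead of the chronological one.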

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial-expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is extracted efficiently from each video frame using a non-parametric HT skin color model and template matching. For 3D head motion tracking, a cylindrical head model is projected onto the initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with the optical flow method. For facial expression cloning, we use a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters describing the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, the facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video.
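The last step, moving non-feature vertices with an RBF, can be sketched with a Gaussian kernel (the abstract does not name the kernel, so this choice and `sigma` are assumptions): fit weights so the control-point displacements are reproduced exactly, then evaluate the interpolant at the surrounding vertices.

```python
import numpy as np

def rbf_displace(ctrl_pts, ctrl_disp, query_pts, sigma=1.0):
    """Gaussian-RBF interpolation of control-point displacements to
    arbitrary query points (e.g. non-feature mesh vertices)."""
    def kernel(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return np.exp(-(d / sigma) ** 2)
    # Solve for weights so each control point reproduces its displacement.
    weights = np.linalg.solve(kernel(ctrl_pts, ctrl_pts), ctrl_disp)
    return kernel(query_pts, ctrl_pts) @ weights
```

By construction the interpolation is exact at the control points, and displacements fall off smoothly with distance from them, which is what makes RBFs a natural fit for propagating sparse feature motion over a dense face mesh.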

Visual Voice Activity Detection and Adaptive Threshold Estimation for Speech Recognition (음성인식기 성능 향상을 위한 영상기반 음성구간 검출 및 적응적 문턱값 추정)

  • Song, Taeyup; Lee, Kyungsun; Kim, Sung Soo; Lee, Jae-Won; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.34 no.4 / pp.321-327 / 2015
  • In this paper, we propose an algorithm for robust Visual Voice Activity Detection (VVAD) to enhance speech recognition. Conventional VVAD algorithms detect visual speech frames from the motion of the lip region using optical flow or chaos-inspired measures. Optical flow-based VVAD is difficult to adopt in driving scenarios because of its computational complexity, while the chaos-theory-based method, though invariant to illumination changes, is sensitive to translations caused by the driver's head movements. The proposed Local Variance Histogram (LVH) is robust to pixel-intensity changes arising from both illumination change and translation. For improved performance under environmental changes, we additionally adopt a novel threshold estimation based on the total variance change. Experimental results show that the proposed VVAD algorithm is robust in various driving situations.
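The LVH feature itself is straightforward to sketch: tile the lip-region patch into small blocks, compute each block's intensity variance, and histogram those variances. Block size, bin count, and range below are illustrative, not the paper's settings:

```python
import numpy as np

def local_variance_histogram(patch, block=4, bins=8, vmax=100.0):
    """Histogram of per-block intensity variances over a grayscale patch.
    Variance is invariant to additive brightness shifts, which is the
    property the LVH feature relies on."""
    h, w = patch.shape
    variances = [patch[i:i + block, j:j + block].var()
                 for i in range(0, h - block + 1, block)
                 for j in range(0, w - block + 1, block)]
    hist, _ = np.histogram(variances, bins=bins, range=(0.0, vmax))
    return hist
```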

Optical Flow Based Vehicle Counting and Speed Estimation in CCTV Videos (Optical Flow 기반 CCTV 영상에서의 차량 통행량 및 통행 속도 추정에 관한 연구)

  • Kim, Jihae; Shin, Dokyung; Kim, Jaekyung; Kwon, Cheolhee; Byun, Hyeran
    • Journal of Broadcast Engineering / v.22 no.4 / pp.448-461 / 2017
  • This paper proposes a vehicle counting and speed estimation method for analyzing traffic situations in road CCTV videos. The proposed method removes perspective distortion from the images using inverse perspective mapping, and obtains a specific region for counting and speed estimation with a lane detection algorithm. Vehicle counts and speed estimates are then derived by applying optical flow within that region. The method achieves a stable accuracy of 88.94% on CCTV footage from several regional groups, evaluated over 106,993 frames, about three hours of video.
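Two of the building blocks, mapping pixels to the ground plane through an inverse-perspective homography and converting per-frame ground displacement into a speed, can be sketched as follows. The 3 x 3 matrix H would come from camera calibration, which is not given here:

```python
import numpy as np

def apply_homography(H, pts):
    """Map N x 2 pixel points through a 3 x 3 homography, e.g. an
    inverse-perspective mapping onto a bird's-eye ground plane."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]   # normalize homogeneous coordinates

def speed_kmh(ground_disp_m, fps):
    """Per-frame ground-plane displacement in metres, converted to km/h."""
    return ground_disp_m * fps * 3.6
```

Tracking a vehicle's flow vectors in the rectified region, converting their pixel displacement to metres via the mapping, and scaling by the frame rate yields the speed estimate.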