• Title/Summary/Keyword: Optical flow estimation


Hand Gesture Interface Using Mobile Camera Devices (모바일 카메라 기기를 이용한 손 제스처 인터페이스)

  • Lee, Chan-Su;Chun, Sung-Yong;Sohn, Myoung-Gyu;Lee, Sang-Heon
    • Journal of KIISE: Computing Practices and Letters / v.16 no.5 / pp.621-625 / 2010
  • This paper presents a hand motion tracking method for a hand gesture interface using the camera in mobile devices such as smartphones and PDAs. When the camera moves according to the user's hand gesture, global optical flows are generated. Robust hand movement estimation is therefore possible by considering the dominant optical flow, obtained from a histogram analysis of motion directions. A continuous hand gesture is segmented into unit gestures by motion state estimation using the motion phase, which is determined from the velocity and acceleration of the estimated hand motion. Feature vectors are extracted during the movement states, and hand gestures are recognized at the end state of each gesture. A support vector machine (SVM), a k-nearest neighbor classifier, and a normal Bayes classifier are used for classification. The SVM shows an 82% recognition rate for 14 hand gestures.
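
The dominant-direction step can be illustrated with a minimal sketch: dense optical flow is computed between two frames, the flow angles are histogrammed (weighted by magnitude), and the most populated bin gives the dominant hand movement direction. This is an illustrative stand-in using OpenCV's Farneback flow, not the authors' implementation; the bin count and flow parameters are assumptions.

```python
import cv2
import numpy as np

def dominant_flow_direction(prev_gray, curr_gray, bins=16):
    """Estimate the dominant motion direction between two grayscale frames.

    Sketch of the histogram-of-directions idea: compute dense optical flow,
    histogram the flow angles weighted by magnitude, and return the angle of
    the most populated bin together with the mean magnitude in that bin.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # ang in radians
    hist, edges = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    k = int(np.argmax(hist))                     # dominant direction bin
    direction = 0.5 * (edges[k] + edges[k + 1])  # bin-center angle
    in_bin = (ang >= edges[k]) & (ang < edges[k + 1])
    speed = float(mag[in_bin].mean()) if np.any(in_bin) else 0.0
    return direction, speed
```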

Passenger Monitoring Method using Optical Flow and Difference Image (차영상과 Optical Flow를 이용한 지하철 승객 감시 방법)

  • Lee, Woo-Seok;Kim, Hyoung-Hoon;Cho, Yong-Gee
    • Proceedings of the KSR Conference / 2011.10a / pp.1966-1972 / 2011
  • Optical flow estimation based on multi-constraint approaches is frequently used for recognizing moving objects. This paper proposes a method to monitor passenger boarding using image processing when a train is operated under Automatic Train Operation (ATO). Passenger movement is detected by comparing two images: a reference image and an image captured immediately by CCTV. Optical flow helps to find the passenger movement when the two images are compared. Passenger movement is important information for the ATO system because the system must decide the door status.
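
A minimal sketch of the difference-image-plus-optical-flow idea, assuming a reference (empty-doorway) image and two consecutive grayscale CCTV frames; the threshold values are illustrative and not taken from the paper.

```python
import cv2
import numpy as np

def detect_passenger_motion(reference, prev_frame, curr_frame,
                            diff_thresh=25, flow_thresh=1.0):
    """Flag passenger movement using a difference image plus dense optical flow.

    The difference against the reference image marks where passengers are,
    and the flow magnitude between consecutive frames says whether they move.
    """
    # Difference image against the reference (empty doorway) frame.
    diff = cv2.absdiff(reference, curr_frame)
    _, occupied = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Dense optical flow between the two most recent frames.
    flow = cv2.calcOpticalFlowFarneback(prev_frame, curr_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)

    # Movement only counts inside the occupied region.
    moving = (mag > flow_thresh) & (occupied > 0)
    return bool(moving.mean() > 0.01), moving  # crude "still boarding" flag
```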


A Motion Vector Recovery Method based on Optical Flow for Temporal Error Concealment in the H.264 Standard (H.264에서 에러은닉을 위한 Optical Flow기반의 움직임벡터 복원 기법)

  • Kim, Dong-Hyung;Jeong, Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.2C / pp.148-155 / 2006
  • To improve coding efficiency, the H.264 standard uses new coding tools that were not available in previous coding standards. Among these tools, motion estimation with smaller block sizes leads to higher correlation between the motion vectors of neighboring blocks. This characteristic of H.264 is useful for motion vector recovery. In this paper, we propose a motion vector recovery method based on optical flow. Since the proposed method estimates the optical flow velocity vector from a more accurate initial value and limits the optical flow region to a 16$\times$16 block, the computational complexity of the optical flow velocity is reduced. Simulation results show that the proposed method gives higher objective and subjective video quality than previous methods.
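
The recovery idea can be sketched roughly as follows: initialize from the median of the neighboring blocks' motion vectors (which H.264's small block sizes make highly correlated) and refine with a small search for the lost 16x16 block, scored by boundary matching since the lost block's own pixels are unavailable. This is a generic neighbor-initialized recovery sketch rather than the paper's optical-flow-based formulation; the search radius and the scoring are assumptions.

```python
import numpy as np

def recover_motion_vector(ref_frame, curr_frame, x, y,
                          neighbor_mvs, block=16, radius=2):
    """Recover the motion vector of a lost block by boundary matching.

    The lost block's own pixels are unavailable, so candidates are scored by
    how well the one-pixel boundary around the lost block in the current frame
    matches the boundary of the displaced block in the reference frame.  The
    search starts from the median of the neighboring blocks' motion vectors.
    Frames are grayscale arrays; the lost block is assumed to be interior.
    """
    def boundary(img, bx, by):
        top = img[by - 1, bx:bx + block]
        bottom = img[by + block, bx:bx + block]
        left = img[by:by + block, bx - 1]
        right = img[by:by + block, bx + block]
        return np.concatenate([top, bottom, left, right]).astype(np.int32)

    init = np.round(np.median(np.asarray(neighbor_mvs, float), axis=0)).astype(int)
    curr_bnd = boundary(curr_frame, x, y)
    h, w = ref_frame.shape

    best_mv, best_cost = (int(init[0]), int(init[1])), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            mvx, mvy = int(init[0]) + dx, int(init[1]) + dy
            bx, by = x + mvx, y + mvy
            if bx < 1 or by < 1 or bx + block >= w or by + block >= h:
                continue  # candidate block falls outside the reference frame
            cost = np.abs(boundary(ref_frame, bx, by) - curr_bnd).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (mvx, mvy)
    return best_mv
```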

Motion Parameter Estimation and Segmentation with Probabilistic Clustering (확률적 클러스터링에 의한 움직임 파라미터 추정과 세그멘테이션)

  • 정차근
    • Journal of Broadcast Engineering / v.3 no.1 / pp.50-60 / 1998
  • This paper addresses the problem of parametric motion estimation and structural motion segmentation for compact image sequence representation and object-based generic video coding. In order to extract a meaningful motion structure from image sequences, a direct parametric motion estimation based on a pre-segmentation is proposed. The pre-segmentation, which takes the motion of the moving objects into account, is carried out by probabilistic clustering with mixture models using optical flow and image intensities. Parametric motion segmentation is then obtained by iterated estimation of the motion model parameters and region reassignment according to a criterion, using the Gauss-Newton iterative optimization algorithm. The efficiency of the proposed method is verified by computer simulation using CIF real image sequences.
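
The mixture-model pre-segmentation can be sketched as per-pixel clustering of optical flow and intensity features, for example with a Gaussian mixture. The sketch below uses scikit-learn as an illustrative stand-in; the feature scaling and the number of components are assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def presegment(prev_gray, curr_gray, n_regions=3):
    """Cluster pixels into candidate motion regions from (flow, intensity) features.

    Each pixel is described by its optical-flow vector and its intensity, and a
    Gaussian mixture assigns it to one of n_regions clusters.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    feats = np.column_stack([
        flow[..., 0].ravel(),
        flow[..., 1].ravel(),
        curr_gray.ravel() / 255.0,   # keep intensity on a comparable scale
    ])
    gmm = GaussianMixture(n_components=n_regions, covariance_type='full',
                          random_state=0).fit(feats)
    labels = gmm.predict(feats).reshape(h, w)
    return labels
```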


A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation (다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM)

  • Geunhyeong Park;HyungGi Jo
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.65-71 / 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as keypoints and line edges. SLAM performance may degrade under various environmental changes; the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, built on ORB-SLAM, that uses multi-channel dynamic object estimation. An optical flow algorithm and a deep-learning-based object detection algorithm each estimate a different type of dynamic object information. The proposed method combines the two kinds of dynamic object information into multi-channel dynamic masks, so that both actually moving objects and potentially dynamic objects can be identified. Finally, dynamic objects included in the masks are removed in the feature extraction stage. As a result, the proposed method obtains more precise camera poses. Its superiority over conventional ORB-SLAM was verified in experiments on the KITTI odometry dataset.
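
The mask-combination step might look roughly like the sketch below: a flow-magnitude mask (actually moving pixels) and a detector mask (potentially dynamic objects) are merged, and ORB keypoints that fall inside the merged mask are discarded before matching. The detection-box format, the magnitude threshold, and the feature count are assumptions, not the authors' configuration.

```python
import cv2
import numpy as np

def filter_dynamic_keypoints(prev_gray, curr_gray, detections, flow_thresh=2.0):
    """Drop ORB keypoints that fall on (actually or potentially) dynamic pixels.

    Channel 1: pixels whose optical-flow magnitude exceeds flow_thresh.
    Channel 2: pixels inside boxes of potentially dynamic classes (e.g. people,
               cars), given as (x, y, w, h) tuples from any object detector.
    """
    h, w = curr_gray.shape

    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    moving_mask = np.linalg.norm(flow, axis=2) > flow_thresh

    detect_mask = np.zeros((h, w), dtype=bool)
    for (bx, by, bw, bh) in detections:
        detect_mask[by:by + bh, bx:bx + bw] = True

    dynamic_mask = moving_mask | detect_mask          # merged multi-channel mask

    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(curr_gray, None)
    keep = [i for i, kp in enumerate(keypoints)
            if not dynamic_mask[int(kp.pt[1]), int(kp.pt[0])]]
    kept_desc = descriptors[keep] if descriptors is not None else None
    return [keypoints[i] for i in keep], kept_desc
```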

Localization with Two Optical Flow Sensors for Small Unmanned Ground Vehicles (두 개의 광류센서를 이용한 소형무인로봇의 위치 추정 기술)

  • Huh, Jinwook;Kang, Sincheon;Hyun, Dongjun
    • Journal of the Korea Institute of Military Science and Technology / v.16 no.2 / pp.95-100 / 2013
  • Localization is very important for the autonomous navigation of unmanned ground vehicles; however, it is difficult to equip them, especially a Small Unmanned Ground Vehicle (SUGV), with a precise Inertial Navigation System (INS) sensor. Moreover, under conditions such as GPS denial, a GPS/INS integrated system is not robust. This paper proposes a position estimation algorithm that combines optical flow sensors with an INS. Compared with previous research, the proposed algorithm is suitable for skid-steering vehicles; the measurement model of the previous work is revised to improve the accuracy of the lateral position. Experiments were performed to verify the algorithm, and the results showed excellent performance.
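
To illustrate why two optical flow sensors suit a skid-steering vehicle, the sketch below dead-reckons the pose from two body-fixed sensors mounted at lateral offsets +d (left) and -d (right) of the vehicle centerline; the difference of their longitudinal displacements approximates the heading change. This is a simplified kinematic illustration, not the paper's INS-fusion measurement model, and the mounting geometry is an assumption.

```python
import numpy as np

def dead_reckon(displacements, d=0.2, x=0.0, y=0.0, yaw=0.0):
    """Integrate pose from two optical-flow sensors on a skid-steer vehicle.

    displacements: iterable of ((dxL, dyL), (dxR, dyR)) per time step, i.e. the
    body-frame displacement measured by the left and right sensors, mounted at
    lateral offsets +d and -d in an x-forward / y-left body frame.
    """
    for (dxL, dyL), (dxR, dyR) in displacements:
        # The average of the two sensors approximates the motion of the vehicle
        # center; their longitudinal difference encodes the rotation.
        dx_body = 0.5 * (dxL + dxR)
        dy_body = 0.5 * (dyL + dyR)
        dyaw = (dxR - dxL) / (2.0 * d)   # small-angle skid-steer rotation

        # Rotate the body-frame displacement into the world frame and integrate.
        c, s = np.cos(yaw), np.sin(yaw)
        x += c * dx_body - s * dy_body
        y += s * dx_body + c * dy_body
        yaw += dyaw
    return x, y, yaw
```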

3D Motion Estimation Using Optical Flow (Optical Flow를 이용한 3차원 운동 정보에 관한 연구)

  • 조혜리;이경무;이상욱
    • Proceedings of the IEEK Conference / 2000.09a / pp.845-848 / 2000
  • A motion vector is the 2D velocity vector in the image that results when the 3D velocity of an object, produced by the relative motion between the observing camera and the observed target, is projected onto the 2D image. Object motion in images is important information for inferring motion in 3D space and is applied to object tracking. This paper deals with the problem of estimating camera motion from several consecutive 2D intensity images. In conventional feature-based tracking methods, tracking is difficult when the feature points of the model and the background are not separated during low-level image processing or when model features are lost; and for motion caused by translation and rotation of the camera and the 3D object, many 3D target features disappear, so errors accumulate. To solve these problems, this paper proposes a method that recovers the camera motion using both target and background features. The proposed 3D camera motion estimation method consists of two main stages: an optical flow search over many features of the 3D model and the background from two consecutive images, and estimation of the camera motion from the obtained motion vectors using the camera's nonlinear motion equations and a Lagrange multiplier.
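
As a rough illustration of the two-stage idea (optical flow of features, then camera motion estimation), the sketch below tracks features with pyramidal Lucas-Kanade flow and recovers the camera rotation and translation direction from the essential matrix with RANSAC. This substitutes a standard epipolar-geometry solver for the paper's nonlinear motion equations and Lagrange-multiplier formulation; the camera intrinsic matrix K is assumed to be known.

```python
import cv2
import numpy as np

def estimate_camera_motion(prev_gray, curr_gray, K):
    """Recover camera rotation R and translation direction t from two frames.

    Features are tracked with pyramidal Lucas-Kanade optical flow, and the
    essential matrix is estimated with RANSAC from the tracked point pairs.
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=800,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    p1, p2 = pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)

    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t  # t is recovered only up to scale
```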


Offline Camera Movement Tracking from Video Sequences

  • Dewi, Primastuti;Choi, Yeon-Seok;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.69-72 / 2011
  • In this paper, we propose a method to track the movement of the camera from video sequences. This method is useful for video analysis and can be applied as a pre-processing step in applications such as video stabilization and marker-less augmented reality. First, we extract features in each frame using corner-point detection. The features in the current frame are then compared with the features in adjacent frames to calculate the optical flow, which represents the relative movement of the camera. The optical flow is then analyzed to obtain the camera movement parameters. The final step is camera movement estimation and correction to increase the accuracy. The method's performance is verified by generating a 3D map of the camera movement and embedding a 3D object into the video. The examples demonstrated in this paper show that the method has high accuracy and rarely produces jitter.
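
A minimal sketch of this corner-tracking pipeline, using Shi-Tomasi corners, pyramidal Lucas-Kanade optical flow, and a RANSAC-fitted similarity transform per frame pair; the transform model and the parameter values are assumptions rather than the paper's exact choices.

```python
import cv2
import numpy as np

def track_camera_motion(frames):
    """Estimate per-frame camera motion parameters (dx, dy, rotation, scale).

    For each pair of consecutive grayscale frames: detect corners, track them
    with pyramidal Lucas-Kanade optical flow, and fit a partial affine
    (similarity) transform with RANSAC to summarize the camera movement.
    """
    motions = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=400,
                                      qualityLevel=0.01, minDistance=8)
        if pts is None:
            motions.append((0.0, 0.0, 0.0, 1.0))
            continue
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
        good = status.ravel() == 1
        M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                           method=cv2.RANSAC)
        if M is None:
            motions.append((0.0, 0.0, 0.0, 1.0))
            continue
        dx, dy = M[0, 2], M[1, 2]
        rot = np.arctan2(M[1, 0], M[0, 0])
        scale = np.hypot(M[0, 0], M[1, 0])
        motions.append((float(dx), float(dy), float(rot), float(scale)))
    return motions
```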


Mobile Robot Localization using Ceiling Landmark Positions and Edge Pixel Movement Vectors (천정부착 랜드마크 위치와 에지 화소의 이동벡터 정보에 의한 이동로봇 위치 인식)

  • Chen, Hong-Xin;Adhikari, Shyam Prasad;Kim, Sung-Woo;Kim, Hyong-Suk
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.368-373 / 2010
  • A new indoor mobile robot localization method is presented. The robot recognizes well-designed single-color landmarks on the ceiling with its vision system and uses them as references to compute its precise position. The proposed likelihood-prediction-based method enables the robot to estimate its position based only on the orientation of a landmark. The use of single-color landmarks reduces the complexity of the landmark structure and makes the landmarks easy to detect. Edge-based optical flow is further used to compensate for landmark recognition errors. The technique is applicable to navigation in indoor spaces of unlimited size. A prediction scheme and a localization algorithm are proposed, and edge-based optical flow and data fusion are presented. Experimental results show that the proposed method provides an accurate estimate of the robot position, with a localization error within 5 cm and a directional error of less than 4 degrees.
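
A minimal sketch of the single-color landmark detection step, recovering a centroid and an orientation from image moments of a color-thresholded mask; the color range and the moments-based orientation are illustrative assumptions, not the paper's exact detector.

```python
import cv2
import numpy as np

def detect_ceiling_landmark(bgr_frame,
                            lower_hsv=(40, 80, 80), upper_hsv=(80, 255, 255)):
    """Detect a single-color ceiling landmark; return its centroid and orientation.

    The default HSV range assumes a green landmark and would be tuned to the
    actual landmark color. Orientation comes from central second moments.
    """
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    m = cv2.moments(mask, binaryImage=True)
    if m['m00'] == 0:
        return None  # landmark not visible in this frame
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    # Orientation of the principal axis from the central second moments.
    theta = 0.5 * np.arctan2(2 * m['mu11'], m['mu20'] - m['mu02'])
    return (cx, cy), theta
```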

Motion Field Estimation Using U-disparity Map and Forward-Backward Error Removal in Vehicle Environment (U-시차 지도와 정/역방향 에러 제거를 통한 자동차 환경에서의 모션 필드 예측)

  • Seo, Seungwoo;Lee, Gyucheol;Lee, Sangyong;Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.12 / pp.2343-2352 / 2015
  • In this paper, we propose a novel motion field estimation method using a U-disparity map and forward-backward error removal in a vehicle environment. In general, in an image obtained from a camera mounted on a vehicle, motion vectors arise from the movement of the vehicle, but these motion vectors are made less accurate by the surrounding environment. In particular, it is difficult to extract accurate motion vectors on the road surface, where adjacent pixels are similar to each other. The proposed method therefore removes the road surface using a U-disparity map and computes optical flow only on the remaining portion. A forward-backward error removal method is used to improve the accuracy of the motion vectors. Finally, we estimate the motion of the vehicle by applying RANSAC (RANdom SAmple Consensus) to the acquired motion vectors and then generate the motion field. Experimental results show that the proposed algorithm performs better than previous methods.
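
The forward-backward error removal and RANSAC steps might be sketched as follows: detect corners off the road (the U-disparity road mask is assumed to be given), track them forward and backward with pyramidal Lucas-Kanade flow, keep only points with a small round-trip error, and fit a global 2D motion model with RANSAC. The thresholds and the choice of motion model are illustrative, not the paper's settings.

```python
import cv2
import numpy as np

def motion_field_with_fb_check(prev_gray, curr_gray, road_mask, fb_thresh=1.0):
    """Estimate sparse motion vectors off the road surface with a forward-backward check.

    road_mask: boolean array, True where the U-disparity analysis labeled road.
    Returns the surviving point pairs and a RANSAC-fitted 2D motion model.
    """
    # Detect corners only outside the road region.
    feature_mask = np.where(road_mask, 0, 255).astype(np.uint8)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                                  minDistance=8, mask=feature_mask)
    if pts is None:
        return None

    # Forward and backward Lucas-Kanade tracking.
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    bwd, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None)

    # Keep points whose round trip returns close to where they started.
    fb_err = np.linalg.norm((pts - bwd).reshape(-1, 2), axis=1)
    keep = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)

    src, dst = pts[keep], fwd[keep]
    model, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return src, dst, model
```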